00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1910 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3171 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.097 The recommended git tool is: git 00:00:00.097 using credential 00000000-0000-0000-0000-000000000002 00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.160 Fetching changes from the remote Git repository 00:00:00.161 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.229 Using shallow fetch with depth 1 00:00:00.229 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.229 > git --version # timeout=10 00:00:00.287 > git --version # 'git version 2.39.2' 00:00:00.287 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.320 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.320 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.177 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.189 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.200 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:06.200 > git config core.sparsecheckout # timeout=10 00:00:06.210 > git read-tree -mu HEAD # timeout=10 00:00:06.225 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:06.243 Commit message: "pool: fixes for VisualBuild class" 00:00:06.243 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:06.335 [Pipeline] Start of Pipeline 00:00:06.350 [Pipeline] library 00:00:06.352 Loading library shm_lib@master 00:00:06.352 Library shm_lib@master is cached. Copying from home. 00:00:06.370 [Pipeline] node 00:00:06.385 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:06.387 [Pipeline] { 00:00:06.400 [Pipeline] catchError 00:00:06.402 [Pipeline] { 00:00:06.417 [Pipeline] wrap 00:00:06.429 [Pipeline] { 00:00:06.437 [Pipeline] stage 00:00:06.438 [Pipeline] { (Prologue) 00:00:06.461 [Pipeline] echo 00:00:06.462 Node: VM-host-SM9 00:00:06.468 [Pipeline] cleanWs 00:00:06.477 [WS-CLEANUP] Deleting project workspace... 00:00:06.477 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.483 [WS-CLEANUP] done 00:00:06.684 [Pipeline] setCustomBuildProperty 00:00:06.746 [Pipeline] nodesByLabel 00:00:06.747 Found a total of 2 nodes with the 'sorcerer' label 00:00:06.754 [Pipeline] httpRequest 00:00:06.758 HttpMethod: GET 00:00:06.758 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:06.767 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:06.769 Response Code: HTTP/1.1 200 OK 00:00:06.769 Success: Status code 200 is in the accepted range: 200,404 00:00:06.770 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.718 [Pipeline] sh 00:00:07.999 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:08.018 [Pipeline] httpRequest 00:00:08.022 HttpMethod: GET 00:00:08.023 URL: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:08.023 Sending request to url: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:08.025 Response Code: HTTP/1.1 200 OK 00:00:08.025 Success: Status code 200 is in the accepted range: 200,404 00:00:08.025 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:26.783 [Pipeline] sh 00:00:27.062 + tar --no-same-owner -xf spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:29.605 [Pipeline] sh 00:00:29.885 + git -C spdk log --oneline -n5 00:00:29.885 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:00:29.885 5d3fd6726 bdev: Fix a race bug between unregistration and QoS poller 00:00:29.885 fbc673ece test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:00:29.885 3651466d0 test/scheduler: Calculate median of the cpu load samples 00:00:29.885 a7414547f test/scheduler: Make sure stderr is not O_TRUNCated in move_proc() 00:00:29.904 [Pipeline] writeFile 00:00:29.920 [Pipeline] sh 00:00:30.201 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:30.213 [Pipeline] sh 00:00:30.494 + cat autorun-spdk.conf 00:00:30.494 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.494 SPDK_TEST_NVME=1 00:00:30.494 SPDK_TEST_FTL=1 00:00:30.494 SPDK_TEST_ISAL=1 00:00:30.494 SPDK_RUN_ASAN=1 00:00:30.494 SPDK_RUN_UBSAN=1 00:00:30.494 SPDK_TEST_XNVME=1 00:00:30.494 SPDK_TEST_NVME_FDP=1 00:00:30.494 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:30.501 RUN_NIGHTLY=1 00:00:30.503 [Pipeline] } 00:00:30.522 [Pipeline] // stage 00:00:30.538 [Pipeline] stage 00:00:30.540 [Pipeline] { (Run VM) 00:00:30.556 [Pipeline] sh 00:00:30.837 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:30.837 + echo 'Start stage prepare_nvme.sh' 00:00:30.837 Start stage prepare_nvme.sh 00:00:30.837 + [[ -n 5 ]] 00:00:30.837 + disk_prefix=ex5 00:00:30.837 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:00:30.837 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:00:30.838 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:00:30.838 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.838 ++ SPDK_TEST_NVME=1 00:00:30.838 ++ SPDK_TEST_FTL=1 00:00:30.838 ++ SPDK_TEST_ISAL=1 00:00:30.838 ++ SPDK_RUN_ASAN=1 00:00:30.838 ++ SPDK_RUN_UBSAN=1 00:00:30.838 ++ SPDK_TEST_XNVME=1 00:00:30.838 ++ SPDK_TEST_NVME_FDP=1 00:00:30.838 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:30.838 ++ RUN_NIGHTLY=1 00:00:30.838 + cd 
/var/jenkins/workspace/nvme-vg-autotest_2 00:00:30.838 + nvme_files=() 00:00:30.838 + declare -A nvme_files 00:00:30.838 + backend_dir=/var/lib/libvirt/images/backends 00:00:30.838 + nvme_files['nvme.img']=5G 00:00:30.838 + nvme_files['nvme-cmb.img']=5G 00:00:30.838 + nvme_files['nvme-multi0.img']=4G 00:00:30.838 + nvme_files['nvme-multi1.img']=4G 00:00:30.838 + nvme_files['nvme-multi2.img']=4G 00:00:30.838 + nvme_files['nvme-openstack.img']=8G 00:00:30.838 + nvme_files['nvme-zns.img']=5G 00:00:30.838 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:30.838 + (( SPDK_TEST_FTL == 1 )) 00:00:30.838 + nvme_files["nvme-ftl.img"]=6G 00:00:30.838 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:30.838 + nvme_files["nvme-fdp.img"]=1G 00:00:30.838 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:30.838 + for nvme in "${!nvme_files[@]}" 00:00:30.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:30.838 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:30.838 + for nvme in "${!nvme_files[@]}" 00:00:30.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:00:31.097 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:31.097 + for nvme in "${!nvme_files[@]}" 00:00:31.097 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:31.097 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.097 + for nvme in "${!nvme_files[@]}" 00:00:31.097 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:31.097 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:31.097 + for nvme in "${!nvme_files[@]}" 00:00:31.097 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:31.355 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.355 + for nvme in "${!nvme_files[@]}" 00:00:31.355 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:31.356 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.356 + for nvme in "${!nvme_files[@]}" 00:00:31.356 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:31.356 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.356 + for nvme in "${!nvme_files[@]}" 00:00:31.356 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:00:31.614 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:31.614 + for nvme in "${!nvme_files[@]}" 00:00:31.614 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:31.614 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.614 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:31.614 + echo 'End stage prepare_nvme.sh' 00:00:31.614 End stage 
prepare_nvme.sh 00:00:31.627 [Pipeline] sh 00:00:31.908 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:31.908 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:00:31.908 00:00:31.908 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:00:31.908 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:00:31.908 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:00:31.908 HELP=0 00:00:31.908 DRY_RUN=0 00:00:31.908 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img, 00:00:31.908 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:31.908 NVME_AUTO_CREATE=0 00:00:31.908 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,, 00:00:31.908 NVME_CMB=,,,, 00:00:31.908 NVME_PMR=,,,, 00:00:31.908 NVME_ZNS=,,,, 00:00:31.908 NVME_MS=true,,,, 00:00:31.908 NVME_FDP=,,,on, 00:00:31.908 SPDK_VAGRANT_DISTRO=fedora38 00:00:31.908 SPDK_VAGRANT_VMCPU=10 00:00:31.908 SPDK_VAGRANT_VMRAM=12288 00:00:31.908 SPDK_VAGRANT_PROVIDER=libvirt 00:00:31.908 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:31.908 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:31.908 SPDK_OPENSTACK_NETWORK=0 00:00:31.908 VAGRANT_PACKAGE_BOX=0 00:00:31.908 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:31.908 FORCE_DISTRO=true 00:00:31.908 VAGRANT_BOX_VERSION= 00:00:31.908 EXTRA_VAGRANTFILES= 00:00:31.908 NIC_MODEL=e1000 00:00:31.908 00:00:31.908 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt' 00:00:31.908 /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:00:35.195 Bringing machine 'default' up with 'libvirt' provider... 00:00:35.762 ==> default: Creating image (snapshot of base box volume). 00:00:36.021 ==> default: Creating domain with the following settings... 
00:00:36.021 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1718012309_eeeb0d881aef300f99c0 00:00:36.021 ==> default: -- Domain type: kvm 00:00:36.021 ==> default: -- Cpus: 10 00:00:36.021 ==> default: -- Feature: acpi 00:00:36.021 ==> default: -- Feature: apic 00:00:36.021 ==> default: -- Feature: pae 00:00:36.021 ==> default: -- Memory: 12288M 00:00:36.021 ==> default: -- Memory Backing: hugepages: 00:00:36.021 ==> default: -- Management MAC: 00:00:36.021 ==> default: -- Loader: 00:00:36.021 ==> default: -- Nvram: 00:00:36.021 ==> default: -- Base box: spdk/fedora38 00:00:36.021 ==> default: -- Storage pool: default 00:00:36.021 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1718012309_eeeb0d881aef300f99c0.img (20G) 00:00:36.021 ==> default: -- Volume Cache: default 00:00:36.021 ==> default: -- Kernel: 00:00:36.021 ==> default: -- Initrd: 00:00:36.021 ==> default: -- Graphics Type: vnc 00:00:36.021 ==> default: -- Graphics Port: -1 00:00:36.021 ==> default: -- Graphics IP: 127.0.0.1 00:00:36.021 ==> default: -- Graphics Password: Not defined 00:00:36.021 ==> default: -- Video Type: cirrus 00:00:36.021 ==> default: -- Video VRAM: 9216 00:00:36.021 ==> default: -- Sound Type: 00:00:36.021 ==> default: -- Keymap: en-us 00:00:36.021 ==> default: -- TPM Path: 00:00:36.021 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:36.021 ==> default: -- Command line args: 00:00:36.021 ==> default: -> value=-device, 00:00:36.021 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:36.021 ==> default: -> value=-drive, 00:00:36.021 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:36.021 ==> default: -> value=-device, 00:00:36.021 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:36.021 ==> default: -> value=-device, 00:00:36.021 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:00:36.021 ==> default: -> value=-drive, 00:00:36.021 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0, 00:00:36.021 ==> default: -> value=-device, 00:00:36.021 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.021 ==> default: -> value=-device, 00:00:36.021 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:00:36.021 ==> default: -> value=-drive, 00:00:36.021 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:36.021 ==> default: -> value=-device, 00:00:36.021 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.021 ==> default: -> value=-drive, 00:00:36.021 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:36.021 ==> default: -> value=-device, 00:00:36.021 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.021 ==> default: -> value=-drive, 00:00:36.021 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:36.021 ==> default: -> value=-device, 00:00:36.021 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.021 ==> default: -> value=-device, 00:00:36.021 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:36.021 ==> default: -> value=-device, 00:00:36.021 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:00:36.021 ==> default: -> value=-drive, 00:00:36.021 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:36.021 ==> default: -> value=-device, 00:00:36.021 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.021 ==> default: Creating shared folders metadata... 00:00:36.021 ==> default: Starting domain. 00:00:37.410 ==> default: Waiting for domain to get an IP address... 00:00:55.503 ==> default: Waiting for SSH to become available... 00:00:55.503 ==> default: Configuring and enabling network interfaces... 00:00:58.037 default: SSH address: 192.168.121.190:22 00:00:58.037 default: SSH username: vagrant 00:00:58.037 default: SSH auth method: private key 00:00:59.940 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:08.055 ==> default: Mounting SSHFS shared folder... 00:01:08.991 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:08.991 ==> default: Checking Mount.. 00:01:09.927 ==> default: Folder Successfully Mounted! 00:01:09.927 ==> default: Running provisioner: file... 00:01:10.866 default: ~/.gitconfig => .gitconfig 00:01:11.125 00:01:11.125 SUCCESS! 00:01:11.125 00:01:11.125 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:11.125 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:11.125 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:01:11.125 00:01:11.134 [Pipeline] } 00:01:11.153 [Pipeline] // stage 00:01:11.163 [Pipeline] dir 00:01:11.163 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt 00:01:11.165 [Pipeline] { 00:01:11.179 [Pipeline] catchError 00:01:11.181 [Pipeline] { 00:01:11.194 [Pipeline] sh 00:01:11.469 + vagrant ssh-config --host vagrant 00:01:11.469 + sed -ne /^Host/,$p 00:01:11.469 + tee ssh_conf 00:01:14.756 Host vagrant 00:01:14.756 HostName 192.168.121.190 00:01:14.756 User vagrant 00:01:14.756 Port 22 00:01:14.756 UserKnownHostsFile /dev/null 00:01:14.756 StrictHostKeyChecking no 00:01:14.756 PasswordAuthentication no 00:01:14.756 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:14.756 IdentitiesOnly yes 00:01:14.756 LogLevel FATAL 00:01:14.756 ForwardAgent yes 00:01:14.756 ForwardX11 yes 00:01:14.756 00:01:14.770 [Pipeline] withEnv 00:01:14.772 [Pipeline] { 00:01:14.788 [Pipeline] sh 00:01:15.067 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:15.067 source /etc/os-release 00:01:15.067 [[ -e /image.version ]] && img=$(< /image.version) 00:01:15.067 # Minimal, systemd-like check. 
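# /.dockerenv is a marker file Docker creates at a container's root, so
# checking for it is a cheap, image-independent "are we in a container" test.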
00:01:15.067 if [[ -e /.dockerenv ]]; then 00:01:15.067 # Clear garbage from the node's name: 00:01:15.067 # agt-er_autotest_547-896 -> autotest_547-896 00:01:15.067 # $HOSTNAME is the actual container id 00:01:15.067 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:15.067 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:15.067 # We can assume this is a mount from a host where container is running, 00:01:15.067 # so fetch its hostname to easily identify the target swarm worker. 00:01:15.067 container="$(< /etc/hostname) ($agent)" 00:01:15.067 else 00:01:15.067 # Fallback 00:01:15.067 container=$agent 00:01:15.067 fi 00:01:15.067 fi 00:01:15.067 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:15.067 00:01:15.337 [Pipeline] } 00:01:15.358 [Pipeline] // withEnv 00:01:15.366 [Pipeline] setCustomBuildProperty 00:01:15.381 [Pipeline] stage 00:01:15.383 [Pipeline] { (Tests) 00:01:15.400 [Pipeline] sh 00:01:15.713 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:15.725 [Pipeline] sh 00:01:16.002 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:16.275 [Pipeline] timeout 00:01:16.276 Timeout set to expire in 40 min 00:01:16.278 [Pipeline] { 00:01:16.293 [Pipeline] sh 00:01:16.567 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:17.135 HEAD is now at 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:01:17.148 [Pipeline] sh 00:01:17.424 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:17.695 [Pipeline] sh 00:01:17.974 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:18.250 [Pipeline] sh 00:01:18.532 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:01:18.791 ++ readlink -f spdk_repo 00:01:18.791 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:18.791 + [[ -n /home/vagrant/spdk_repo ]] 00:01:18.791 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:18.791 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:18.791 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:18.791 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:18.791 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:18.791 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:18.791 + cd /home/vagrant/spdk_repo 00:01:18.791 + source /etc/os-release 00:01:18.791 ++ NAME='Fedora Linux' 00:01:18.791 ++ VERSION='38 (Cloud Edition)' 00:01:18.791 ++ ID=fedora 00:01:18.791 ++ VERSION_ID=38 00:01:18.791 ++ VERSION_CODENAME= 00:01:18.791 ++ PLATFORM_ID=platform:f38 00:01:18.791 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:18.791 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:18.791 ++ LOGO=fedora-logo-icon 00:01:18.791 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:18.791 ++ HOME_URL=https://fedoraproject.org/ 00:01:18.791 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:18.791 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:18.791 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:18.791 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:18.791 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:18.791 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:18.791 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:18.791 ++ SUPPORT_END=2024-05-14 00:01:18.791 ++ VARIANT='Cloud Edition' 00:01:18.791 ++ VARIANT_ID=cloud 00:01:18.791 + uname -a 00:01:18.792 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:18.792 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:18.792 Hugepages 00:01:18.792 node hugesize free / total 00:01:18.792 node0 1048576kB 0 / 0 00:01:19.051 node0 2048kB 0 / 0 00:01:19.051 00:01:19.051 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:19.051 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:19.051 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:19.051 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:19.051 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:19.051 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3c3n1 00:01:19.051 + rm -f /tmp/spdk-ld-path 00:01:19.051 + source autorun-spdk.conf 00:01:19.051 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.051 ++ SPDK_TEST_NVME=1 00:01:19.051 ++ SPDK_TEST_FTL=1 00:01:19.051 ++ SPDK_TEST_ISAL=1 00:01:19.051 ++ SPDK_RUN_ASAN=1 00:01:19.051 ++ SPDK_RUN_UBSAN=1 00:01:19.051 ++ SPDK_TEST_XNVME=1 00:01:19.051 ++ SPDK_TEST_NVME_FDP=1 00:01:19.051 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.051 ++ RUN_NIGHTLY=1 00:01:19.051 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:19.051 + [[ -n '' ]] 00:01:19.051 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:19.051 + for M in /var/spdk/build-*-manifest.txt 00:01:19.051 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:19.051 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.051 + for M in /var/spdk/build-*-manifest.txt 00:01:19.051 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:19.051 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.310 ++ uname 00:01:19.310 + [[ Linux == \L\i\n\u\x ]] 00:01:19.310 + sudo dmesg -T 00:01:19.310 + sudo dmesg --clear 00:01:19.310 + dmesg_pid=5192 00:01:19.310 + sudo dmesg -Tw 00:01:19.310 + [[ Fedora Linux == FreeBSD ]] 00:01:19.310 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.310 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.310 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:19.310 + [[ -x /usr/src/fio-static/fio ]] 00:01:19.310 + export 
FIO_BIN=/usr/src/fio-static/fio 00:01:19.310 + FIO_BIN=/usr/src/fio-static/fio 00:01:19.310 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:19.310 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:19.310 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:19.310 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.310 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.310 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:19.310 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.310 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.310 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:19.310 Test configuration: 00:01:19.310 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.310 SPDK_TEST_NVME=1 00:01:19.310 SPDK_TEST_FTL=1 00:01:19.310 SPDK_TEST_ISAL=1 00:01:19.310 SPDK_RUN_ASAN=1 00:01:19.310 SPDK_RUN_UBSAN=1 00:01:19.310 SPDK_TEST_XNVME=1 00:01:19.310 SPDK_TEST_NVME_FDP=1 00:01:19.310 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.310 RUN_NIGHTLY=1 09:39:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:19.310 09:39:12 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:19.310 09:39:12 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:19.310 09:39:12 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:19.311 09:39:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.311 09:39:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.311 09:39:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.311 09:39:12 -- paths/export.sh@5 -- $ export PATH 00:01:19.311 09:39:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.311 09:39:12 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:19.311 09:39:12 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:19.311 09:39:12 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718012352.XXXXXX 00:01:19.311 09:39:12 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718012352.xZJtuv 
00:01:19.311 09:39:12 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:19.311 09:39:12 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:19.311 09:39:12 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:19.311 09:39:12 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:19.311 09:39:12 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:19.311 09:39:12 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:19.311 09:39:12 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:19.311 09:39:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.311 09:39:12 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:19.311 09:39:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:19.311 09:39:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:19.311 09:39:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:19.311 09:39:12 -- spdk/autobuild.sh@16 -- $ date -u 00:01:19.311 Mon Jun 10 09:39:12 AM UTC 2024 00:01:19.311 09:39:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:19.311 LTS-43-g130b9406a 00:01:19.311 09:39:13 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:19.311 09:39:13 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:19.311 09:39:13 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:19.311 09:39:13 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:19.311 09:39:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.311 ************************************ 00:01:19.311 START TEST asan 00:01:19.311 ************************************ 00:01:19.311 using asan 00:01:19.311 09:39:13 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:01:19.311 00:01:19.311 real 0m0.000s 00:01:19.311 user 0m0.000s 00:01:19.311 sys 0m0.000s 00:01:19.311 09:39:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:19.311 09:39:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.311 ************************************ 00:01:19.311 END TEST asan 00:01:19.311 ************************************ 00:01:19.311 09:39:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:19.311 09:39:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:19.311 09:39:13 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:19.311 09:39:13 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:19.311 09:39:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.311 ************************************ 00:01:19.311 START TEST ubsan 00:01:19.311 ************************************ 00:01:19.311 using ubsan 00:01:19.311 09:39:13 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:19.311 00:01:19.311 real 0m0.000s 00:01:19.311 user 0m0.000s 00:01:19.311 sys 0m0.000s 00:01:19.311 09:39:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:19.311 ************************************ 00:01:19.311 09:39:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.311 END TEST ubsan 00:01:19.311 ************************************ 00:01:19.570 09:39:13 -- 
spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:19.570 09:39:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:19.570 09:39:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:19.570 09:39:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:19.570 09:39:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:19.570 09:39:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:19.570 09:39:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:19.570 09:39:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:19.570 09:39:13 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:19.570 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:19.570 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:20.139 Using 'verbs' RDMA provider 00:01:35.647 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:01:47.854 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:47.854 Creating mk/config.mk...done. 00:01:47.854 Creating mk/cc.flags.mk...done. 00:01:47.854 Type 'make' to build. 00:01:47.854 09:39:40 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:47.854 09:39:40 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:47.854 09:39:40 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:47.854 09:39:40 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.854 ************************************ 00:01:47.854 START TEST make 00:01:47.854 ************************************ 00:01:47.854 09:39:40 -- common/autotest_common.sh@1104 -- $ make -j10 00:01:47.854 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:01:47.854 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:01:47.854 meson setup builddir \ 00:01:47.854 -Dwith-libaio=enabled \ 00:01:47.854 -Dwith-liburing=enabled \ 00:01:47.854 -Dwith-libvfn=disabled \ 00:01:47.854 -Dwith-spdk=false && \ 00:01:47.854 meson compile -C builddir && \ 00:01:47.854 cd -) 00:01:47.854 make[1]: Nothing to be done for 'all'. 
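For reference, the `meson compile -C builddir` step in the make recipe above just dispatches to the autodetected backend; the log itself confirms this further down with "autodetecting backend as ninja" and "calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir". A minimal sketch of driving the same build directly, with the directory, builddir name, and ninja path taken from the logged commands and everything else assumed:

    cd /home/vagrant/spdk_repo/spdk/xnvme
    /usr/local/bin/ninja -C builddir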
00:01:49.754 The Meson build system 00:01:49.754 Version: 1.3.1 00:01:49.754 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:01:49.754 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:49.754 Build type: native build 00:01:49.754 Project name: xnvme 00:01:49.754 Project version: 0.7.3 00:01:49.754 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:49.754 C linker for the host machine: cc ld.bfd 2.39-16 00:01:49.754 Host machine cpu family: x86_64 00:01:49.754 Host machine cpu: x86_64 00:01:49.754 Message: host_machine.system: linux 00:01:49.754 Compiler for C supports arguments -Wno-missing-braces: YES 00:01:49.754 Compiler for C supports arguments -Wno-cast-function-type: YES 00:01:49.754 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:49.754 Run-time dependency threads found: YES 00:01:49.754 Has header "setupapi.h" : NO 00:01:49.754 Has header "linux/blkzoned.h" : YES 00:01:49.754 Has header "linux/blkzoned.h" : YES (cached) 00:01:49.754 Has header "libaio.h" : YES 00:01:49.754 Library aio found: YES 00:01:49.754 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:49.754 Run-time dependency liburing found: YES 2.2 00:01:49.754 Dependency libvfn skipped: feature with-libvfn disabled 00:01:49.754 Run-time dependency appleframeworks found: NO (tried framework) 00:01:49.754 Run-time dependency appleframeworks found: NO (tried framework) 00:01:49.754 Configuring xnvme_config.h using configuration 00:01:49.754 Configuring xnvme.spec using configuration 00:01:49.754 Run-time dependency bash-completion found: YES 2.11 00:01:49.754 Message: Bash-completions: /usr/share/bash-completion/completions 00:01:49.754 Program cp found: YES (/usr/bin/cp) 00:01:49.754 Has header "winsock2.h" : NO 00:01:49.754 Has header "dbghelp.h" : NO 00:01:49.754 Library rpcrt4 found: NO 00:01:49.754 Library rt found: YES 00:01:49.754 Checking for function "clock_gettime" with dependency -lrt: YES 00:01:49.754 Found CMake: /usr/bin/cmake (3.27.7) 00:01:49.754 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:01:49.754 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:01:49.754 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:01:49.754 Build targets in project: 32 00:01:49.754 00:01:49.754 xnvme 0.7.3 00:01:49.754 00:01:49.754 User defined options 00:01:49.754 with-libaio : enabled 00:01:49.755 with-liburing: enabled 00:01:49.755 with-libvfn : disabled 00:01:49.755 with-spdk : false 00:01:49.755 00:01:49.755 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.321 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:01:50.321 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:01:50.321 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:01:50.321 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:01:50.321 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:01:50.321 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:01:50.321 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:01:50.321 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:01:50.321 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:01:50.321 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:01:50.321 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:01:50.321 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:01:50.321 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:01:50.578 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:01:50.579 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:01:50.579 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:01:50.579 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:01:50.579 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:01:50.579 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:01:50.579 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:01:50.579 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:01:50.579 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:01:50.579 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:01:50.579 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:01:50.579 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:01:50.579 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:01:50.579 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:01:50.579 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:01:50.837 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:01:50.837 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:01:50.837 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:01:50.837 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:01:50.837 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:01:50.837 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:01:50.837 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:01:50.837 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:01:50.837 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:01:50.837 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:01:50.837 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:01:50.837 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:01:50.837 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:01:50.837 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:01:50.837 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:01:50.837 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:01:50.837 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:01:50.837 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:01:50.837 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:01:50.837 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:01:50.837 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:01:50.837 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:01:50.837 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:01:50.837 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:01:50.837 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:01:50.837 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:01:50.837 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:01:50.837 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:01:50.837 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:01:51.095 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:01:51.095 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:01:51.095 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:01:51.095 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:01:51.095 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:01:51.095 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:01:51.095 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:01:51.095 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:01:51.095 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:01:51.095 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:01:51.095 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:01:51.095 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:01:51.095 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:01:51.095 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:01:51.095 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:01:51.095 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:01:51.353 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:01:51.353 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:01:51.353 [75/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:01:51.353 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:01:51.353 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:01:51.353 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:01:51.353 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:01:51.353 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:01:51.353 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:01:51.353 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:01:51.353 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:01:51.353 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:01:51.353 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:01:51.353 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:01:51.353 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:01:51.353 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:01:51.611 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:01:51.611 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:01:51.611 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:01:51.611 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:01:51.611 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:01:51.611 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:01:51.611 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:01:51.611 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:01:51.611 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:01:51.611 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:01:51.611 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:01:51.611 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:01:51.611 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:01:51.611 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:01:51.611 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:01:51.611 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:01:51.611 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:01:51.611 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:01:51.611 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:01:51.611 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:01:51.611 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:01:51.611 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:01:51.611 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:01:51.611 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:01:51.611 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:01:51.611 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:01:51.611 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:01:51.611 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:01:51.611 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:01:51.611 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:01:51.611 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:01:51.611 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:01:51.869 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:01:51.869 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:01:51.869 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:01:51.869 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:01:51.869 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:01:51.869 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:01:51.869 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:01:51.869 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:01:51.869 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:01:51.869 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:01:51.869 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:01:51.869 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:01:51.869 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:01:51.869 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:01:51.869 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:01:51.869 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:01:51.869 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:01:51.869 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:01:52.128 [139/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:01:52.128 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:01:52.128 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:01:52.128 [142/203] Compiling C object 
tests/xnvme_tests_buf.p/buf.c.o 00:01:52.128 [143/203] Linking target lib/libxnvme.so 00:01:52.128 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:01:52.128 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:01:52.128 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:01:52.128 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:01:52.128 [148/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:01:52.128 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:01:52.128 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:01:52.128 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:01:52.387 [152/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:01:52.387 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:01:52.387 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:01:52.387 [155/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:01:52.387 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:01:52.387 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:01:52.387 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:01:52.387 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:01:52.387 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:01:52.387 [161/203] Compiling C object tools/lblk.p/lblk.c.o 00:01:52.387 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:01:52.387 [163/203] Compiling C object tools/xdd.p/xdd.c.o 00:01:52.646 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:01:52.646 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:01:52.646 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:01:52.646 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:01:52.646 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:01:52.646 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:01:52.646 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:01:52.646 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:01:52.646 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:01:52.646 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:01:52.646 [174/203] Linking static target lib/libxnvme.a 00:01:52.904 [175/203] Linking target tests/xnvme_tests_cli 00:01:52.904 [176/203] Linking target tests/xnvme_tests_znd_state 00:01:52.904 [177/203] Linking target tests/xnvme_tests_ioworker 00:01:52.905 [178/203] Linking target tests/xnvme_tests_buf 00:01:52.905 [179/203] Linking target tests/xnvme_tests_lblk 00:01:52.905 [180/203] Linking target tests/xnvme_tests_xnvme_cli 00:01:52.905 [181/203] Linking target tests/xnvme_tests_async_intf 00:01:52.905 [182/203] Linking target tests/xnvme_tests_enum 00:01:52.905 [183/203] Linking target tests/xnvme_tests_scc 00:01:52.905 [184/203] Linking target tests/xnvme_tests_xnvme_file 00:01:52.905 [185/203] Linking target tests/xnvme_tests_znd_append 00:01:52.905 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:01:52.905 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:01:52.905 [188/203] Linking target tests/xnvme_tests_kvs 00:01:52.905 [189/203] Linking target tools/xdd 00:01:52.905 [190/203] Linking target examples/xnvme_hello 00:01:52.905 [191/203] Linking 
target tests/xnvme_tests_map 00:01:52.905 [192/203] Linking target tools/kvs 00:01:52.905 [193/203] Linking target tools/xnvme 00:01:52.905 [194/203] Linking target tools/zoned 00:01:52.905 [195/203] Linking target examples/xnvme_dev 00:01:52.905 [196/203] Linking target tools/lblk 00:01:52.905 [197/203] Linking target examples/xnvme_enum 00:01:52.905 [198/203] Linking target tools/xnvme_file 00:01:52.905 [199/203] Linking target examples/xnvme_single_async 00:01:52.905 [200/203] Linking target examples/zoned_io_async 00:01:52.905 [201/203] Linking target examples/xnvme_io_async 00:01:52.905 [202/203] Linking target examples/zoned_io_sync 00:01:52.905 [203/203] Linking target examples/xnvme_single_sync 00:01:52.905 INFO: autodetecting backend as ninja 00:01:52.905 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:52.905 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:01:59.468 The Meson build system 00:01:59.468 Version: 1.3.1 00:01:59.468 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:59.468 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:59.468 Build type: native build 00:01:59.468 Program cat found: YES (/usr/bin/cat) 00:01:59.468 Project name: DPDK 00:01:59.468 Project version: 23.11.0 00:01:59.468 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:59.468 C linker for the host machine: cc ld.bfd 2.39-16 00:01:59.468 Host machine cpu family: x86_64 00:01:59.468 Host machine cpu: x86_64 00:01:59.468 Message: ## Building in Developer Mode ## 00:01:59.468 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.468 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:59.468 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.468 Program python3 found: YES (/usr/bin/python3) 00:01:59.468 Program cat found: YES (/usr/bin/cat) 00:01:59.468 Compiler for C supports arguments -march=native: YES 00:01:59.468 Checking for size of "void *" : 8 00:01:59.468 Checking for size of "void *" : 8 (cached) 00:01:59.468 Library m found: YES 00:01:59.468 Library numa found: YES 00:01:59.468 Has header "numaif.h" : YES 00:01:59.468 Library fdt found: NO 00:01:59.468 Library execinfo found: NO 00:01:59.468 Has header "execinfo.h" : YES 00:01:59.468 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:59.468 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.468 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.468 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.468 Run-time dependency openssl found: YES 3.0.9 00:01:59.468 Run-time dependency libpcap found: YES 1.10.4 00:01:59.468 Has header "pcap.h" with dependency libpcap: YES 00:01:59.468 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.468 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.468 Compiler for C supports arguments -Wformat: YES 00:01:59.468 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:59.468 Compiler for C supports arguments -Wformat-security: NO 00:01:59.468 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.468 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:59.468 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.468 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.468 Compiler for C supports arguments -Wpointer-arith: YES 
00:01:59.468 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.468 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.468 Compiler for C supports arguments -Wundef: YES 00:01:59.468 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.468 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.468 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:59.468 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.468 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.468 Program objdump found: YES (/usr/bin/objdump) 00:01:59.468 Compiler for C supports arguments -mavx512f: YES 00:01:59.468 Checking if "AVX512 checking" compiles: YES 00:01:59.468 Fetching value of define "__SSE4_2__" : 1 00:01:59.468 Fetching value of define "__AES__" : 1 00:01:59.468 Fetching value of define "__AVX__" : 1 00:01:59.468 Fetching value of define "__AVX2__" : 1 00:01:59.468 Fetching value of define "__AVX512BW__" : (undefined) 00:01:59.468 Fetching value of define "__AVX512CD__" : (undefined) 00:01:59.468 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:59.468 Fetching value of define "__AVX512F__" : (undefined) 00:01:59.468 Fetching value of define "__AVX512VL__" : (undefined) 00:01:59.468 Fetching value of define "__PCLMUL__" : 1 00:01:59.468 Fetching value of define "__RDRND__" : 1 00:01:59.468 Fetching value of define "__RDSEED__" : 1 00:01:59.468 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.468 Fetching value of define "__znver1__" : (undefined) 00:01:59.468 Fetching value of define "__znver2__" : (undefined) 00:01:59.468 Fetching value of define "__znver3__" : (undefined) 00:01:59.468 Fetching value of define "__znver4__" : (undefined) 00:01:59.468 Library asan found: YES 00:01:59.468 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.468 Message: lib/log: Defining dependency "log" 00:01:59.468 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.468 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.468 Library rt found: YES 00:01:59.468 Checking for function "getentropy" : NO 00:01:59.468 Message: lib/eal: Defining dependency "eal" 00:01:59.468 Message: lib/ring: Defining dependency "ring" 00:01:59.468 Message: lib/rcu: Defining dependency "rcu" 00:01:59.468 Message: lib/mempool: Defining dependency "mempool" 00:01:59.468 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.468 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:59.468 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.468 Compiler for C supports arguments -mpclmul: YES 00:01:59.468 Compiler for C supports arguments -maes: YES 00:01:59.468 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.468 Compiler for C supports arguments -mavx512bw: YES 00:01:59.468 Compiler for C supports arguments -mavx512dq: YES 00:01:59.468 Compiler for C supports arguments -mavx512vl: YES 00:01:59.468 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.468 Compiler for C supports arguments -mavx2: YES 00:01:59.468 Compiler for C supports arguments -mavx: YES 00:01:59.468 Message: lib/net: Defining dependency "net" 00:01:59.468 Message: lib/meter: Defining dependency "meter" 00:01:59.468 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.468 Message: lib/pci: Defining dependency "pci" 00:01:59.468 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.468 Message: lib/hash: Defining dependency "hash" 
00:01:59.468 Message: lib/timer: Defining dependency "timer" 00:01:59.468 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.468 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.468 Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.468 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.468 Message: lib/power: Defining dependency "power" 00:01:59.468 Message: lib/reorder: Defining dependency "reorder" 00:01:59.468 Message: lib/security: Defining dependency "security" 00:01:59.468 Has header "linux/userfaultfd.h" : YES 00:01:59.468 Has header "linux/vduse.h" : YES 00:01:59.468 Message: lib/vhost: Defining dependency "vhost" 00:01:59.468 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:59.468 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:59.468 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:59.468 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:59.468 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:59.468 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:59.468 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:59.468 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:59.468 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:59.468 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:59.468 Program doxygen found: YES (/usr/bin/doxygen) 00:01:59.469 Configuring doxy-api-html.conf using configuration 00:01:59.469 Configuring doxy-api-man.conf using configuration 00:01:59.469 Program mandb found: YES (/usr/bin/mandb) 00:01:59.469 Program sphinx-build found: NO 00:01:59.469 Configuring rte_build_config.h using configuration 00:01:59.469 Message: 00:01:59.469 ================= 00:01:59.469 Applications Enabled 00:01:59.469 ================= 00:01:59.469 00:01:59.469 apps: 00:01:59.469 00:01:59.469 00:01:59.469 Message: 00:01:59.469 ================= 00:01:59.469 Libraries Enabled 00:01:59.469 ================= 00:01:59.469 00:01:59.469 libs: 00:01:59.469 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:59.469 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:59.469 cryptodev, dmadev, power, reorder, security, vhost, 00:01:59.469 00:01:59.469 Message: 00:01:59.469 =============== 00:01:59.469 Drivers Enabled 00:01:59.469 =============== 00:01:59.469 00:01:59.469 common: 00:01:59.469 00:01:59.469 bus: 00:01:59.469 pci, vdev, 00:01:59.469 mempool: 00:01:59.469 ring, 00:01:59.469 dma: 00:01:59.469 00:01:59.469 net: 00:01:59.469 00:01:59.469 crypto: 00:01:59.469 00:01:59.469 compress: 00:01:59.469 00:01:59.469 vdpa: 00:01:59.469 00:01:59.469 00:01:59.469 Message: 00:01:59.469 ================= 00:01:59.469 Content Skipped 00:01:59.469 ================= 00:01:59.469 00:01:59.469 apps: 00:01:59.469 dumpcap: explicitly disabled via build config 00:01:59.469 graph: explicitly disabled via build config 00:01:59.469 pdump: explicitly disabled via build config 00:01:59.469 proc-info: explicitly disabled via build config 00:01:59.469 test-acl: explicitly disabled via build config 00:01:59.469 test-bbdev: explicitly disabled via build config 00:01:59.469 test-cmdline: explicitly disabled via build config 00:01:59.469 test-compress-perf: explicitly disabled via build config 00:01:59.469 test-crypto-perf: explicitly disabled via build config 00:01:59.469 test-dma-perf: explicitly 
disabled via build config 00:01:59.469 test-eventdev: explicitly disabled via build config 00:01:59.469 test-fib: explicitly disabled via build config 00:01:59.469 test-flow-perf: explicitly disabled via build config 00:01:59.469 test-gpudev: explicitly disabled via build config 00:01:59.469 test-mldev: explicitly disabled via build config 00:01:59.469 test-pipeline: explicitly disabled via build config 00:01:59.469 test-pmd: explicitly disabled via build config 00:01:59.469 test-regex: explicitly disabled via build config 00:01:59.469 test-sad: explicitly disabled via build config 00:01:59.469 test-security-perf: explicitly disabled via build config 00:01:59.469 00:01:59.469 libs: 00:01:59.469 metrics: explicitly disabled via build config 00:01:59.469 acl: explicitly disabled via build config 00:01:59.469 bbdev: explicitly disabled via build config 00:01:59.469 bitratestats: explicitly disabled via build config 00:01:59.469 bpf: explicitly disabled via build config 00:01:59.469 cfgfile: explicitly disabled via build config 00:01:59.469 distributor: explicitly disabled via build config 00:01:59.469 efd: explicitly disabled via build config 00:01:59.469 eventdev: explicitly disabled via build config 00:01:59.469 dispatcher: explicitly disabled via build config 00:01:59.469 gpudev: explicitly disabled via build config 00:01:59.469 gro: explicitly disabled via build config 00:01:59.469 gso: explicitly disabled via build config 00:01:59.469 ip_frag: explicitly disabled via build config 00:01:59.469 jobstats: explicitly disabled via build config 00:01:59.469 latencystats: explicitly disabled via build config 00:01:59.469 lpm: explicitly disabled via build config 00:01:59.469 member: explicitly disabled via build config 00:01:59.469 pcapng: explicitly disabled via build config 00:01:59.469 rawdev: explicitly disabled via build config 00:01:59.469 regexdev: explicitly disabled via build config 00:01:59.469 mldev: explicitly disabled via build config 00:01:59.469 rib: explicitly disabled via build config 00:01:59.469 sched: explicitly disabled via build config 00:01:59.469 stack: explicitly disabled via build config 00:01:59.469 ipsec: explicitly disabled via build config 00:01:59.469 pdcp: explicitly disabled via build config 00:01:59.469 fib: explicitly disabled via build config 00:01:59.469 port: explicitly disabled via build config 00:01:59.469 pdump: explicitly disabled via build config 00:01:59.469 table: explicitly disabled via build config 00:01:59.469 pipeline: explicitly disabled via build config 00:01:59.469 graph: explicitly disabled via build config 00:01:59.469 node: explicitly disabled via build config 00:01:59.469 00:01:59.469 drivers: 00:01:59.469 common/cpt: not in enabled drivers build config 00:01:59.469 common/dpaax: not in enabled drivers build config 00:01:59.469 common/iavf: not in enabled drivers build config 00:01:59.469 common/idpf: not in enabled drivers build config 00:01:59.469 common/mvep: not in enabled drivers build config 00:01:59.469 common/octeontx: not in enabled drivers build config 00:01:59.469 bus/auxiliary: not in enabled drivers build config 00:01:59.469 bus/cdx: not in enabled drivers build config 00:01:59.469 bus/dpaa: not in enabled drivers build config 00:01:59.469 bus/fslmc: not in enabled drivers build config 00:01:59.469 bus/ifpga: not in enabled drivers build config 00:01:59.469 bus/platform: not in enabled drivers build config 00:01:59.469 bus/vmbus: not in enabled drivers build config 00:01:59.469 common/cnxk: not in enabled drivers build config 
00:01:59.469 common/mlx5: not in enabled drivers build config 00:01:59.469 common/nfp: not in enabled drivers build config 00:01:59.469 common/qat: not in enabled drivers build config 00:01:59.469 common/sfc_efx: not in enabled drivers build config 00:01:59.469 mempool/bucket: not in enabled drivers build config 00:01:59.469 mempool/cnxk: not in enabled drivers build config 00:01:59.469 mempool/dpaa: not in enabled drivers build config 00:01:59.469 mempool/dpaa2: not in enabled drivers build config 00:01:59.469 mempool/octeontx: not in enabled drivers build config 00:01:59.469 mempool/stack: not in enabled drivers build config 00:01:59.469 dma/cnxk: not in enabled drivers build config 00:01:59.469 dma/dpaa: not in enabled drivers build config 00:01:59.469 dma/dpaa2: not in enabled drivers build config 00:01:59.469 dma/hisilicon: not in enabled drivers build config 00:01:59.469 dma/idxd: not in enabled drivers build config 00:01:59.469 dma/ioat: not in enabled drivers build config 00:01:59.469 dma/skeleton: not in enabled drivers build config 00:01:59.469 net/af_packet: not in enabled drivers build config 00:01:59.469 net/af_xdp: not in enabled drivers build config 00:01:59.469 net/ark: not in enabled drivers build config 00:01:59.469 net/atlantic: not in enabled drivers build config 00:01:59.469 net/avp: not in enabled drivers build config 00:01:59.469 net/axgbe: not in enabled drivers build config 00:01:59.469 net/bnx2x: not in enabled drivers build config 00:01:59.469 net/bnxt: not in enabled drivers build config 00:01:59.469 net/bonding: not in enabled drivers build config 00:01:59.469 net/cnxk: not in enabled drivers build config 00:01:59.469 net/cpfl: not in enabled drivers build config 00:01:59.469 net/cxgbe: not in enabled drivers build config 00:01:59.469 net/dpaa: not in enabled drivers build config 00:01:59.469 net/dpaa2: not in enabled drivers build config 00:01:59.469 net/e1000: not in enabled drivers build config 00:01:59.469 net/ena: not in enabled drivers build config 00:01:59.469 net/enetc: not in enabled drivers build config 00:01:59.469 net/enetfec: not in enabled drivers build config 00:01:59.469 net/enic: not in enabled drivers build config 00:01:59.469 net/failsafe: not in enabled drivers build config 00:01:59.469 net/fm10k: not in enabled drivers build config 00:01:59.469 net/gve: not in enabled drivers build config 00:01:59.469 net/hinic: not in enabled drivers build config 00:01:59.469 net/hns3: not in enabled drivers build config 00:01:59.469 net/i40e: not in enabled drivers build config 00:01:59.469 net/iavf: not in enabled drivers build config 00:01:59.469 net/ice: not in enabled drivers build config 00:01:59.469 net/idpf: not in enabled drivers build config 00:01:59.469 net/igc: not in enabled drivers build config 00:01:59.469 net/ionic: not in enabled drivers build config 00:01:59.469 net/ipn3ke: not in enabled drivers build config 00:01:59.469 net/ixgbe: not in enabled drivers build config 00:01:59.469 net/mana: not in enabled drivers build config 00:01:59.469 net/memif: not in enabled drivers build config 00:01:59.469 net/mlx4: not in enabled drivers build config 00:01:59.469 net/mlx5: not in enabled drivers build config 00:01:59.469 net/mvneta: not in enabled drivers build config 00:01:59.469 net/mvpp2: not in enabled drivers build config 00:01:59.469 net/netvsc: not in enabled drivers build config 00:01:59.469 net/nfb: not in enabled drivers build config 00:01:59.469 net/nfp: not in enabled drivers build config 00:01:59.469 net/ngbe: not in enabled drivers 
build config 00:01:59.469 net/null: not in enabled drivers build config 00:01:59.469 net/octeontx: not in enabled drivers build config 00:01:59.469 net/octeon_ep: not in enabled drivers build config 00:01:59.469 net/pcap: not in enabled drivers build config 00:01:59.469 net/pfe: not in enabled drivers build config 00:01:59.469 net/qede: not in enabled drivers build config 00:01:59.469 net/ring: not in enabled drivers build config 00:01:59.469 net/sfc: not in enabled drivers build config 00:01:59.469 net/softnic: not in enabled drivers build config 00:01:59.469 net/tap: not in enabled drivers build config 00:01:59.469 net/thunderx: not in enabled drivers build config 00:01:59.469 net/txgbe: not in enabled drivers build config 00:01:59.469 net/vdev_netvsc: not in enabled drivers build config 00:01:59.469 net/vhost: not in enabled drivers build config 00:01:59.469 net/virtio: not in enabled drivers build config 00:01:59.469 net/vmxnet3: not in enabled drivers build config 00:01:59.469 raw/*: missing internal dependency, "rawdev" 00:01:59.469 crypto/armv8: not in enabled drivers build config 00:01:59.469 crypto/bcmfs: not in enabled drivers build config 00:01:59.469 crypto/caam_jr: not in enabled drivers build config 00:01:59.470 crypto/ccp: not in enabled drivers build config 00:01:59.470 crypto/cnxk: not in enabled drivers build config 00:01:59.470 crypto/dpaa_sec: not in enabled drivers build config 00:01:59.470 crypto/dpaa2_sec: not in enabled drivers build config 00:01:59.470 crypto/ipsec_mb: not in enabled drivers build config 00:01:59.470 crypto/mlx5: not in enabled drivers build config 00:01:59.470 crypto/mvsam: not in enabled drivers build config 00:01:59.470 crypto/nitrox: not in enabled drivers build config 00:01:59.470 crypto/null: not in enabled drivers build config 00:01:59.470 crypto/octeontx: not in enabled drivers build config 00:01:59.470 crypto/openssl: not in enabled drivers build config 00:01:59.470 crypto/scheduler: not in enabled drivers build config 00:01:59.470 crypto/uadk: not in enabled drivers build config 00:01:59.470 crypto/virtio: not in enabled drivers build config 00:01:59.470 compress/isal: not in enabled drivers build config 00:01:59.470 compress/mlx5: not in enabled drivers build config 00:01:59.470 compress/octeontx: not in enabled drivers build config 00:01:59.470 compress/zlib: not in enabled drivers build config 00:01:59.470 regex/*: missing internal dependency, "regexdev" 00:01:59.470 ml/*: missing internal dependency, "mldev" 00:01:59.470 vdpa/ifc: not in enabled drivers build config 00:01:59.470 vdpa/mlx5: not in enabled drivers build config 00:01:59.470 vdpa/nfp: not in enabled drivers build config 00:01:59.470 vdpa/sfc: not in enabled drivers build config 00:01:59.470 event/*: missing internal dependency, "eventdev" 00:01:59.470 baseband/*: missing internal dependency, "bbdev" 00:01:59.470 gpu/*: missing internal dependency, "gpudev" 00:01:59.470 00:01:59.470 00:01:59.470 Build targets in project: 85 00:01:59.470 00:01:59.470 DPDK 23.11.0 00:01:59.470 00:01:59.470 User defined options 00:01:59.470 buildtype : debug 00:01:59.470 default_library : shared 00:01:59.470 libdir : lib 00:01:59.470 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:59.470 b_sanitize : address 00:01:59.470 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:59.470 c_link_args : 00:01:59.470 cpu_instruction_set: native 00:01:59.470 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:59.470 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:59.470 enable_docs : false 00:01:59.470 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:59.470 enable_kmods : false 00:01:59.470 tests : false 00:01:59.470 00:01:59.470 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.036 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:00.036 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:00.036 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.036 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.036 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.036 [5/265] Linking static target lib/librte_kvargs.a 00:02:00.036 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:00.036 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.036 [8/265] Linking static target lib/librte_log.a 00:02:00.036 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.295 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.553 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.812 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.812 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.812 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.812 [15/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.071 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.071 [17/265] Linking target lib/librte_log.so.24.0 00:02:01.071 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:01.071 [19/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:01.071 [20/265] Linking static target lib/librte_telemetry.a 00:02:01.071 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:01.330 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.330 [23/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:01.330 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.330 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:01.589 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.589 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:01.849 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.849 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.849 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.849 [31/265] Generating lib/telemetry.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:01.849 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:02.111 [33/265] Linking target lib/librte_telemetry.so.24.0 00:02:02.111 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:02.111 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:02.369 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:02.369 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:02.369 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:02.369 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:02.369 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.369 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.628 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:02.628 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:02.628 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.887 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.887 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:03.145 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:03.145 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:03.145 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.145 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:03.404 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:03.404 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:03.404 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:03.404 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:03.662 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:03.662 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:03.920 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:03.920 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.920 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:03.920 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:04.178 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:04.178 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:04.178 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:04.178 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:04.178 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:04.436 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:04.436 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:04.693 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:04.952 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:04.952 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:04.952 [71/265] Compiling C 
object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:04.952 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:04.952 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:04.952 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:04.952 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:04.952 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:04.952 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:05.211 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:05.469 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:05.469 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:05.469 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:05.728 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:05.728 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:05.728 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:05.728 [85/265] Linking static target lib/librte_ring.a 00:02:05.987 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:05.987 [87/265] Linking static target lib/librte_eal.a 00:02:06.246 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.246 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:06.246 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.246 [91/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.246 [92/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:06.504 [93/265] Linking static target lib/librte_rcu.a 00:02:06.504 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.504 [95/265] Linking static target lib/librte_mempool.a 00:02:06.762 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.762 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:07.021 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.021 [99/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.021 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.021 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.279 [102/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.846 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:07.846 [104/265] Linking static target lib/librte_mbuf.a 00:02:07.846 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.846 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.846 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.846 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.846 [109/265] Linking static target lib/librte_net.a 00:02:07.846 [110/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.104 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:08.104 [112/265] Linking static target lib/librte_meter.a 00:02:08.104 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 
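With librte_eal, librte_ring, librte_mempool, librte_rcu, librte_mbuf, and librte_net now linked as static targets, the core of a DPDK application is in place. A minimal consumer sketch against the public DPDK 23.11 API (rte_eal_init and rte_pktmbuf_pool_create are real calls; the pool sizing values here are arbitrary illustration numbers):

    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_debug.h>
    #include <rte_mbuf.h>

    int main(int argc, char **argv)
    {
        /* Bring up the Environment Abstraction Layer built above. */
        if (rte_eal_init(argc, argv) < 0)
            rte_exit(EXIT_FAILURE, "EAL init failed\n");

        /* Allocate a packet-buffer pool from lib/mempool + lib/mbuf. */
        struct rte_mempool *mp = rte_pktmbuf_pool_create(
            "mbuf_pool", 8192, 256, 0,
            RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (mp == NULL)
            rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

        rte_eal_cleanup();
        return 0;
    }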
00:02:08.104 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:08.104 [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.364 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.364 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:08.364 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:08.624 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.881 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.881 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:09.139 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:09.397 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:09.397 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:09.397 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:09.397 [126/265] Linking static target lib/librte_pci.a 00:02:09.397 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.656 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:09.656 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.656 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:09.656 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:09.656 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:09.656 [133/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.656 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.656 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:09.914 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.914 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.914 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:09.914 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.914 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.914 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:09.914 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:10.172 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:10.172 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:10.172 [145/265] Linking static target lib/librte_cmdline.a 00:02:10.172 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:10.739 [147/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:10.739 [148/265] Linking static target lib/librte_timer.a 00:02:10.739 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:10.739 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:10.998 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:10.998 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:11.256 [153/265] Compiling C 
object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:11.256 [154/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.515 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:11.515 [156/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:11.515 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:11.515 [158/265] Linking static target lib/librte_compressdev.a 00:02:11.773 [159/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:11.773 [160/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:11.773 [161/265] Linking static target lib/librte_ethdev.a 00:02:11.773 [162/265] Linking static target lib/librte_hash.a 00:02:11.773 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:12.031 [164/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.031 [165/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:12.031 [166/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:12.031 [167/265] Linking static target lib/librte_dmadev.a 00:02:12.031 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:12.032 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:12.032 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:12.599 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.599 [172/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.599 [173/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:12.599 [174/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.857 [175/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.857 [176/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:12.857 [177/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.857 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:12.857 [179/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:12.857 [180/265] Linking static target lib/librte_cryptodev.a 00:02:13.116 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:13.374 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:13.374 [183/265] Linking static target lib/librte_power.a 00:02:13.632 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:13.632 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:13.632 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.632 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.632 [188/265] Linking static target lib/librte_security.a 00:02:13.891 [189/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.891 [190/265] Linking static target lib/librte_reorder.a 00:02:14.149 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.149 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:14.408 [193/265] Generating lib/security.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:14.408 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.408 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:14.666 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:14.925 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:14.925 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:14.925 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:14.925 [200/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.925 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:14.925 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:15.492 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:15.493 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:15.493 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:15.493 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:15.493 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:15.493 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:15.751 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:15.751 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:15.751 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.751 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.751 [213/265] Linking static target drivers/librte_bus_vdev.a 00:02:15.751 [214/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.751 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.751 [216/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:15.751 [217/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:15.751 [218/265] Linking static target drivers/librte_bus_pci.a 00:02:16.009 [219/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:16.009 [220/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.009 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.009 [222/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.009 [223/265] Linking static target drivers/librte_mempool_ring.a 00:02:16.268 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.835 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.835 [226/265] Linking target lib/librte_eal.so.24.0 00:02:17.093 [227/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:17.093 [228/265] Linking target lib/librte_timer.so.24.0 00:02:17.093 [229/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:17.093 [230/265] Linking target lib/librte_meter.so.24.0 00:02:17.093 [231/265] Linking 
target drivers/librte_bus_vdev.so.24.0 00:02:17.093 [232/265] Linking target lib/librte_dmadev.so.24.0 00:02:17.093 [233/265] Linking target lib/librte_ring.so.24.0 00:02:17.093 [234/265] Linking target lib/librte_pci.so.24.0 00:02:17.352 [235/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:17.352 [236/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:17.352 [237/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:17.352 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:17.352 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:17.352 [240/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:17.352 [241/265] Linking target lib/librte_mempool.so.24.0 00:02:17.352 [242/265] Linking target lib/librte_rcu.so.24.0 00:02:17.612 [243/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:17.612 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:17.612 [245/265] Linking target lib/librte_mbuf.so.24.0 00:02:17.612 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:17.870 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:17.870 [248/265] Linking target lib/librte_net.so.24.0 00:02:17.870 [249/265] Linking target lib/librte_reorder.so.24.0 00:02:17.870 [250/265] Linking target lib/librte_compressdev.so.24.0 00:02:17.870 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:02:18.130 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:18.130 [253/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:18.130 [254/265] Linking target lib/librte_hash.so.24.0 00:02:18.130 [255/265] Linking target lib/librte_cmdline.so.24.0 00:02:18.130 [256/265] Linking target lib/librte_security.so.24.0 00:02:18.130 [257/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:18.698 [258/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.698 [259/265] Linking target lib/librte_ethdev.so.24.0 00:02:18.957 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:18.957 [261/265] Linking target lib/librte_power.so.24.0 00:02:20.858 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:20.858 [263/265] Linking static target lib/librte_vhost.a 00:02:22.234 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.234 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:22.234 INFO: autodetecting backend as ninja 00:02:22.234 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:23.609 CC lib/ut_mock/mock.o 00:02:23.609 CC lib/ut/ut.o 00:02:23.609 CC lib/log/log.o 00:02:23.609 CC lib/log/log_flags.o 00:02:23.609 CC lib/log/log_deprecated.o 00:02:23.609 LIB libspdk_ut_mock.a 00:02:23.609 LIB libspdk_ut.a 00:02:23.609 SO libspdk_ut_mock.so.5.0 00:02:23.609 LIB libspdk_log.a 00:02:23.609 SO libspdk_ut.so.1.0 00:02:23.609 SO libspdk_log.so.6.1 00:02:23.609 SYMLINK libspdk_ut_mock.so 00:02:23.868 SYMLINK libspdk_ut.so 00:02:23.868 SYMLINK libspdk_log.so 00:02:23.868 CC lib/dma/dma.o 00:02:23.868 CC lib/util/bit_array.o 00:02:23.868 CC 
lib/util/base64.o 00:02:23.868 CC lib/util/cpuset.o 00:02:23.868 CC lib/ioat/ioat.o 00:02:23.868 CC lib/util/crc16.o 00:02:23.868 CC lib/util/crc32.o 00:02:23.868 CC lib/util/crc32c.o 00:02:23.868 CXX lib/trace_parser/trace.o 00:02:24.126 CC lib/vfio_user/host/vfio_user_pci.o 00:02:24.126 CC lib/vfio_user/host/vfio_user.o 00:02:24.126 CC lib/util/crc32_ieee.o 00:02:24.126 CC lib/util/crc64.o 00:02:24.126 CC lib/util/dif.o 00:02:24.126 LIB libspdk_dma.a 00:02:24.126 CC lib/util/fd.o 00:02:24.126 SO libspdk_dma.so.3.0 00:02:24.126 CC lib/util/file.o 00:02:24.385 CC lib/util/hexlify.o 00:02:24.385 CC lib/util/iov.o 00:02:24.385 SYMLINK libspdk_dma.so 00:02:24.385 CC lib/util/math.o 00:02:24.385 CC lib/util/pipe.o 00:02:24.385 LIB libspdk_ioat.a 00:02:24.385 CC lib/util/strerror_tls.o 00:02:24.385 SO libspdk_ioat.so.6.0 00:02:24.385 LIB libspdk_vfio_user.a 00:02:24.385 CC lib/util/string.o 00:02:24.385 CC lib/util/uuid.o 00:02:24.385 SO libspdk_vfio_user.so.4.0 00:02:24.385 SYMLINK libspdk_ioat.so 00:02:24.385 CC lib/util/fd_group.o 00:02:24.385 CC lib/util/xor.o 00:02:24.385 CC lib/util/zipf.o 00:02:24.644 SYMLINK libspdk_vfio_user.so 00:02:24.902 LIB libspdk_util.a 00:02:24.902 SO libspdk_util.so.8.0 00:02:25.160 LIB libspdk_trace_parser.a 00:02:25.160 SYMLINK libspdk_util.so 00:02:25.160 SO libspdk_trace_parser.so.4.0 00:02:25.160 CC lib/json/json_parse.o 00:02:25.160 SYMLINK libspdk_trace_parser.so 00:02:25.160 CC lib/json/json_util.o 00:02:25.160 CC lib/rdma/common.o 00:02:25.160 CC lib/json/json_write.o 00:02:25.160 CC lib/env_dpdk/env.o 00:02:25.160 CC lib/rdma/rdma_verbs.o 00:02:25.160 CC lib/vmd/vmd.o 00:02:25.160 CC lib/conf/conf.o 00:02:25.160 CC lib/idxd/idxd.o 00:02:25.160 CC lib/env_dpdk/memory.o 00:02:25.418 CC lib/vmd/led.o 00:02:25.418 LIB libspdk_conf.a 00:02:25.418 CC lib/idxd/idxd_user.o 00:02:25.418 CC lib/idxd/idxd_kernel.o 00:02:25.676 SO libspdk_conf.so.5.0 00:02:25.676 LIB libspdk_rdma.a 00:02:25.676 LIB libspdk_json.a 00:02:25.676 SO libspdk_rdma.so.5.0 00:02:25.676 SYMLINK libspdk_conf.so 00:02:25.676 CC lib/env_dpdk/pci.o 00:02:25.676 SO libspdk_json.so.5.1 00:02:25.676 CC lib/env_dpdk/init.o 00:02:25.676 SYMLINK libspdk_rdma.so 00:02:25.676 CC lib/env_dpdk/threads.o 00:02:25.676 CC lib/env_dpdk/pci_ioat.o 00:02:25.676 SYMLINK libspdk_json.so 00:02:25.676 CC lib/env_dpdk/pci_virtio.o 00:02:25.935 CC lib/env_dpdk/pci_vmd.o 00:02:25.935 CC lib/env_dpdk/pci_idxd.o 00:02:25.935 CC lib/env_dpdk/pci_event.o 00:02:25.935 CC lib/env_dpdk/sigbus_handler.o 00:02:25.935 CC lib/env_dpdk/pci_dpdk.o 00:02:25.935 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:25.935 LIB libspdk_idxd.a 00:02:25.935 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:25.935 SO libspdk_idxd.so.11.0 00:02:26.193 SYMLINK libspdk_idxd.so 00:02:26.193 LIB libspdk_vmd.a 00:02:26.193 CC lib/jsonrpc/jsonrpc_server.o 00:02:26.193 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:26.193 CC lib/jsonrpc/jsonrpc_client.o 00:02:26.193 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:26.193 SO libspdk_vmd.so.5.0 00:02:26.193 SYMLINK libspdk_vmd.so 00:02:26.451 LIB libspdk_jsonrpc.a 00:02:26.451 SO libspdk_jsonrpc.so.5.1 00:02:26.709 SYMLINK libspdk_jsonrpc.so 00:02:26.709 CC lib/rpc/rpc.o 00:02:27.277 LIB libspdk_rpc.a 00:02:27.277 LIB libspdk_env_dpdk.a 00:02:27.277 SO libspdk_rpc.so.5.0 00:02:27.277 SYMLINK libspdk_rpc.so 00:02:27.277 SO libspdk_env_dpdk.so.13.0 00:02:27.277 CC lib/notify/notify.o 00:02:27.277 CC lib/notify/notify_rpc.o 00:02:27.277 CC lib/sock/sock.o 00:02:27.277 CC lib/trace/trace.o 00:02:27.277 CC lib/trace/trace_flags.o 
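The lib/json and lib/jsonrpc objects compiled above back SPDK's RPC plane. A hedged sketch of the public spdk/json.h writer those objects implement; spdk_json_write_create() streams encoded bytes through a user callback (here, plain stdout), though flag and helper names can shift between SPDK releases:

    #include <stdio.h>
    #include "spdk/json.h"

    static int
    write_cb(void *cb_ctx, const void *data, size_t size)
    {
        fwrite(data, 1, size, stdout);   /* forward encoded JSON bytes */
        return 0;
    }

    int main(void)
    {
        struct spdk_json_write_ctx *w =
            spdk_json_write_create(write_cb, NULL, SPDK_JSON_WRITE_FLAG_FORMATTED);

        spdk_json_write_object_begin(w);
        spdk_json_write_named_string(w, "test", "nvme-vg-autotest");
        spdk_json_write_object_end(w);
        spdk_json_write_end(w);          /* flush and free the context */
        putchar('\n');
        return 0;
    }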
00:02:27.277 CC lib/sock/sock_rpc.o 00:02:27.277 CC lib/trace/trace_rpc.o 00:02:27.536 SYMLINK libspdk_env_dpdk.so 00:02:27.536 LIB libspdk_notify.a 00:02:27.536 SO libspdk_notify.so.5.0 00:02:27.536 LIB libspdk_trace.a 00:02:27.835 SYMLINK libspdk_notify.so 00:02:27.835 SO libspdk_trace.so.9.0 00:02:27.835 SYMLINK libspdk_trace.so 00:02:27.835 LIB libspdk_sock.a 00:02:27.835 SO libspdk_sock.so.8.0 00:02:28.173 CC lib/thread/thread.o 00:02:28.173 CC lib/thread/iobuf.o 00:02:28.173 SYMLINK libspdk_sock.so 00:02:28.173 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:28.173 CC lib/nvme/nvme_ctrlr.o 00:02:28.173 CC lib/nvme/nvme_fabric.o 00:02:28.173 CC lib/nvme/nvme_ns_cmd.o 00:02:28.173 CC lib/nvme/nvme_ns.o 00:02:28.173 CC lib/nvme/nvme_pcie_common.o 00:02:28.173 CC lib/nvme/nvme_pcie.o 00:02:28.173 CC lib/nvme/nvme_qpair.o 00:02:28.444 CC lib/nvme/nvme.o 00:02:29.010 CC lib/nvme/nvme_quirks.o 00:02:29.011 CC lib/nvme/nvme_transport.o 00:02:29.011 CC lib/nvme/nvme_discovery.o 00:02:29.268 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:29.268 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:29.268 CC lib/nvme/nvme_tcp.o 00:02:29.526 CC lib/nvme/nvme_opal.o 00:02:29.526 CC lib/nvme/nvme_io_msg.o 00:02:29.526 CC lib/nvme/nvme_poll_group.o 00:02:29.785 CC lib/nvme/nvme_zns.o 00:02:29.785 CC lib/nvme/nvme_cuse.o 00:02:29.785 CC lib/nvme/nvme_vfio_user.o 00:02:29.785 LIB libspdk_thread.a 00:02:29.785 CC lib/nvme/nvme_rdma.o 00:02:30.044 SO libspdk_thread.so.9.0 00:02:30.044 SYMLINK libspdk_thread.so 00:02:30.044 CC lib/accel/accel.o 00:02:30.044 CC lib/blob/blobstore.o 00:02:30.303 CC lib/blob/request.o 00:02:30.303 CC lib/blob/zeroes.o 00:02:30.303 CC lib/accel/accel_rpc.o 00:02:30.562 CC lib/blob/blob_bs_dev.o 00:02:30.562 CC lib/accel/accel_sw.o 00:02:30.821 CC lib/init/json_config.o 00:02:30.821 CC lib/init/subsystem.o 00:02:30.821 CC lib/virtio/virtio.o 00:02:30.821 CC lib/virtio/virtio_vhost_user.o 00:02:30.821 CC lib/init/subsystem_rpc.o 00:02:30.821 CC lib/init/rpc.o 00:02:30.821 CC lib/virtio/virtio_vfio_user.o 00:02:31.080 CC lib/virtio/virtio_pci.o 00:02:31.080 LIB libspdk_init.a 00:02:31.080 SO libspdk_init.so.4.0 00:02:31.080 SYMLINK libspdk_init.so 00:02:31.339 LIB libspdk_virtio.a 00:02:31.339 CC lib/event/app.o 00:02:31.339 CC lib/event/reactor.o 00:02:31.339 CC lib/event/scheduler_static.o 00:02:31.339 CC lib/event/log_rpc.o 00:02:31.339 CC lib/event/app_rpc.o 00:02:31.339 SO libspdk_virtio.so.6.0 00:02:31.598 LIB libspdk_accel.a 00:02:31.598 SYMLINK libspdk_virtio.so 00:02:31.598 SO libspdk_accel.so.14.0 00:02:31.598 LIB libspdk_nvme.a 00:02:31.598 SYMLINK libspdk_accel.so 00:02:31.856 SO libspdk_nvme.so.12.0 00:02:31.856 CC lib/bdev/bdev.o 00:02:31.856 CC lib/bdev/bdev_zone.o 00:02:31.856 CC lib/bdev/bdev_rpc.o 00:02:31.856 CC lib/bdev/part.o 00:02:31.856 CC lib/bdev/scsi_nvme.o 00:02:31.856 LIB libspdk_event.a 00:02:32.115 SO libspdk_event.so.12.0 00:02:32.115 SYMLINK libspdk_event.so 00:02:32.115 SYMLINK libspdk_nvme.so 00:02:34.019 LIB libspdk_blob.a 00:02:34.278 SO libspdk_blob.so.10.1 00:02:34.278 SYMLINK libspdk_blob.so 00:02:34.536 CC lib/blobfs/blobfs.o 00:02:34.536 CC lib/blobfs/tree.o 00:02:34.536 CC lib/lvol/lvol.o 00:02:35.471 LIB libspdk_bdev.a 00:02:35.471 SO libspdk_bdev.so.14.0 00:02:35.471 SYMLINK libspdk_bdev.so 00:02:35.471 LIB libspdk_blobfs.a 00:02:35.471 SO libspdk_blobfs.so.9.0 00:02:35.729 CC lib/ublk/ublk.o 00:02:35.729 CC lib/ublk/ublk_rpc.o 00:02:35.729 CC lib/nvmf/ctrlr.o 00:02:35.729 CC lib/nvmf/ctrlr_discovery.o 00:02:35.729 CC lib/scsi/dev.o 00:02:35.729 CC lib/scsi/lun.o 
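The lib/nvme objects above (nvme_ctrlr.o, nvme_ns.o, nvme_pcie.o, ...) form SPDK's userspace NVMe driver. A minimal probe/attach sketch against its public spdk/nvme.h callback API, assuming the documented callback shapes; real code would also handle detach and actual I/O:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("probing %s\n", trid->traddr);
        return true;   /* attach to every controller found */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("attached to %s\n", trid->traddr);
    }

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);          /* defaults from lib/env_dpdk */
        opts.name = "probe_example";
        if (spdk_env_init(&opts) < 0)
            return 1;

        /* Enumerate local PCIe NVMe controllers via the driver built above. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }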
00:02:35.729 CC lib/nbd/nbd.o 00:02:35.729 CC lib/ftl/ftl_core.o 00:02:35.729 SYMLINK libspdk_blobfs.so 00:02:35.729 CC lib/ftl/ftl_init.o 00:02:35.729 LIB libspdk_lvol.a 00:02:35.729 SO libspdk_lvol.so.9.1 00:02:35.729 CC lib/ftl/ftl_layout.o 00:02:35.729 SYMLINK libspdk_lvol.so 00:02:35.729 CC lib/ftl/ftl_debug.o 00:02:35.987 CC lib/ftl/ftl_io.o 00:02:35.987 CC lib/ftl/ftl_sb.o 00:02:35.987 CC lib/scsi/port.o 00:02:35.987 CC lib/scsi/scsi.o 00:02:36.245 CC lib/nbd/nbd_rpc.o 00:02:36.245 CC lib/scsi/scsi_bdev.o 00:02:36.245 CC lib/scsi/scsi_pr.o 00:02:36.246 CC lib/scsi/scsi_rpc.o 00:02:36.246 CC lib/ftl/ftl_l2p.o 00:02:36.246 CC lib/nvmf/ctrlr_bdev.o 00:02:36.246 CC lib/nvmf/subsystem.o 00:02:36.246 CC lib/nvmf/nvmf.o 00:02:36.246 CC lib/nvmf/nvmf_rpc.o 00:02:36.246 LIB libspdk_nbd.a 00:02:36.246 SO libspdk_nbd.so.6.0 00:02:36.504 CC lib/ftl/ftl_l2p_flat.o 00:02:36.504 SYMLINK libspdk_nbd.so 00:02:36.504 CC lib/ftl/ftl_nv_cache.o 00:02:36.504 LIB libspdk_ublk.a 00:02:36.504 SO libspdk_ublk.so.2.0 00:02:36.504 CC lib/nvmf/transport.o 00:02:36.504 SYMLINK libspdk_ublk.so 00:02:36.504 CC lib/scsi/task.o 00:02:36.504 CC lib/ftl/ftl_band.o 00:02:36.761 CC lib/nvmf/tcp.o 00:02:36.761 LIB libspdk_scsi.a 00:02:37.019 SO libspdk_scsi.so.8.0 00:02:37.019 SYMLINK libspdk_scsi.so 00:02:37.019 CC lib/nvmf/rdma.o 00:02:37.019 CC lib/ftl/ftl_band_ops.o 00:02:37.278 CC lib/iscsi/conn.o 00:02:37.278 CC lib/ftl/ftl_writer.o 00:02:37.278 CC lib/ftl/ftl_rq.o 00:02:37.278 CC lib/vhost/vhost.o 00:02:37.536 CC lib/vhost/vhost_rpc.o 00:02:37.536 CC lib/ftl/ftl_reloc.o 00:02:37.536 CC lib/ftl/ftl_l2p_cache.o 00:02:37.536 CC lib/iscsi/init_grp.o 00:02:37.536 CC lib/iscsi/iscsi.o 00:02:37.794 CC lib/iscsi/md5.o 00:02:38.053 CC lib/ftl/ftl_p2l.o 00:02:38.053 CC lib/iscsi/param.o 00:02:38.053 CC lib/iscsi/portal_grp.o 00:02:38.053 CC lib/iscsi/tgt_node.o 00:02:38.053 CC lib/vhost/vhost_scsi.o 00:02:38.311 CC lib/ftl/mngt/ftl_mngt.o 00:02:38.311 CC lib/iscsi/iscsi_subsystem.o 00:02:38.311 CC lib/iscsi/iscsi_rpc.o 00:02:38.311 CC lib/vhost/vhost_blk.o 00:02:38.311 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:38.573 CC lib/iscsi/task.o 00:02:38.573 CC lib/vhost/rte_vhost_user.o 00:02:38.573 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:38.834 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:38.834 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:38.834 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:38.834 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:38.834 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:38.834 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:39.093 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:39.093 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:39.093 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:39.093 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:39.350 CC lib/ftl/utils/ftl_conf.o 00:02:39.351 CC lib/ftl/utils/ftl_md.o 00:02:39.351 CC lib/ftl/utils/ftl_mempool.o 00:02:39.351 CC lib/ftl/utils/ftl_bitmap.o 00:02:39.351 CC lib/ftl/utils/ftl_property.o 00:02:39.351 LIB libspdk_iscsi.a 00:02:39.351 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:39.351 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:39.351 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:39.351 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:39.609 SO libspdk_iscsi.so.7.0 00:02:39.609 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:39.609 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:39.609 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:39.609 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:39.867 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:39.867 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:39.867 SYMLINK libspdk_iscsi.so 00:02:39.867 CC lib/ftl/base/ftl_base_dev.o 00:02:39.867 
CC lib/ftl/base/ftl_base_bdev.o 00:02:39.867 CC lib/ftl/ftl_trace.o 00:02:39.867 LIB libspdk_vhost.a 00:02:39.867 SO libspdk_vhost.so.7.1 00:02:39.867 LIB libspdk_nvmf.a 00:02:40.125 SYMLINK libspdk_vhost.so 00:02:40.125 SO libspdk_nvmf.so.17.0 00:02:40.125 LIB libspdk_ftl.a 00:02:40.384 SYMLINK libspdk_nvmf.so 00:02:40.384 SO libspdk_ftl.so.8.0 00:02:40.950 SYMLINK libspdk_ftl.so 00:02:40.950 CC module/env_dpdk/env_dpdk_rpc.o 00:02:41.208 CC module/accel/ioat/accel_ioat.o 00:02:41.208 CC module/sock/posix/posix.o 00:02:41.208 CC module/accel/error/accel_error.o 00:02:41.208 CC module/accel/dsa/accel_dsa.o 00:02:41.208 CC module/scheduler/gscheduler/gscheduler.o 00:02:41.208 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:41.208 CC module/blob/bdev/blob_bdev.o 00:02:41.209 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:41.209 CC module/accel/iaa/accel_iaa.o 00:02:41.209 LIB libspdk_env_dpdk_rpc.a 00:02:41.209 SO libspdk_env_dpdk_rpc.so.5.0 00:02:41.209 SYMLINK libspdk_env_dpdk_rpc.so 00:02:41.209 CC module/accel/ioat/accel_ioat_rpc.o 00:02:41.209 LIB libspdk_scheduler_dpdk_governor.a 00:02:41.209 LIB libspdk_scheduler_gscheduler.a 00:02:41.209 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:41.209 SO libspdk_scheduler_gscheduler.so.3.0 00:02:41.209 CC module/accel/error/accel_error_rpc.o 00:02:41.209 LIB libspdk_scheduler_dynamic.a 00:02:41.467 SO libspdk_scheduler_dynamic.so.3.0 00:02:41.467 CC module/accel/dsa/accel_dsa_rpc.o 00:02:41.467 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:41.467 SYMLINK libspdk_scheduler_gscheduler.so 00:02:41.467 CC module/accel/iaa/accel_iaa_rpc.o 00:02:41.467 LIB libspdk_accel_ioat.a 00:02:41.467 SYMLINK libspdk_scheduler_dynamic.so 00:02:41.467 LIB libspdk_blob_bdev.a 00:02:41.467 SO libspdk_accel_ioat.so.5.0 00:02:41.467 SO libspdk_blob_bdev.so.10.1 00:02:41.467 LIB libspdk_accel_error.a 00:02:41.467 SYMLINK libspdk_accel_ioat.so 00:02:41.467 LIB libspdk_accel_dsa.a 00:02:41.467 SYMLINK libspdk_blob_bdev.so 00:02:41.467 LIB libspdk_accel_iaa.a 00:02:41.467 SO libspdk_accel_error.so.1.0 00:02:41.467 SO libspdk_accel_dsa.so.4.0 00:02:41.467 SO libspdk_accel_iaa.so.2.0 00:02:41.725 SYMLINK libspdk_accel_error.so 00:02:41.725 SYMLINK libspdk_accel_dsa.so 00:02:41.725 SYMLINK libspdk_accel_iaa.so 00:02:41.725 CC module/bdev/delay/vbdev_delay.o 00:02:41.725 CC module/bdev/lvol/vbdev_lvol.o 00:02:41.725 CC module/bdev/gpt/gpt.o 00:02:41.725 CC module/blobfs/bdev/blobfs_bdev.o 00:02:41.725 CC module/bdev/error/vbdev_error.o 00:02:41.725 CC module/bdev/malloc/bdev_malloc.o 00:02:41.725 CC module/bdev/null/bdev_null.o 00:02:41.725 CC module/bdev/nvme/bdev_nvme.o 00:02:41.725 CC module/bdev/passthru/vbdev_passthru.o 00:02:41.983 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:41.983 CC module/bdev/gpt/vbdev_gpt.o 00:02:41.983 LIB libspdk_sock_posix.a 00:02:41.983 CC module/bdev/error/vbdev_error_rpc.o 00:02:41.983 SO libspdk_sock_posix.so.5.0 00:02:41.983 CC module/bdev/null/bdev_null_rpc.o 00:02:41.983 LIB libspdk_blobfs_bdev.a 00:02:42.241 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:42.241 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:42.241 SO libspdk_blobfs_bdev.so.5.0 00:02:42.241 SYMLINK libspdk_sock_posix.so 00:02:42.241 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:42.241 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:42.241 LIB libspdk_bdev_error.a 00:02:42.241 SYMLINK libspdk_blobfs_bdev.so 00:02:42.241 CC module/bdev/nvme/nvme_rpc.o 00:02:42.241 LIB libspdk_bdev_gpt.a 00:02:42.241 SO libspdk_bdev_error.so.5.0 00:02:42.241 LIB 
libspdk_bdev_null.a 00:02:42.241 SO libspdk_bdev_gpt.so.5.0 00:02:42.241 SO libspdk_bdev_null.so.5.0 00:02:42.241 LIB libspdk_bdev_passthru.a 00:02:42.241 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:42.241 SYMLINK libspdk_bdev_error.so 00:02:42.241 LIB libspdk_bdev_delay.a 00:02:42.241 SO libspdk_bdev_passthru.so.5.0 00:02:42.241 SYMLINK libspdk_bdev_gpt.so 00:02:42.241 LIB libspdk_bdev_malloc.a 00:02:42.241 SO libspdk_bdev_delay.so.5.0 00:02:42.499 SYMLINK libspdk_bdev_null.so 00:02:42.499 SO libspdk_bdev_malloc.so.5.0 00:02:42.499 SYMLINK libspdk_bdev_passthru.so 00:02:42.499 CC module/bdev/raid/bdev_raid.o 00:02:42.499 SYMLINK libspdk_bdev_delay.so 00:02:42.499 CC module/bdev/split/vbdev_split.o 00:02:42.499 SYMLINK libspdk_bdev_malloc.so 00:02:42.499 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:42.499 CC module/bdev/nvme/bdev_mdns_client.o 00:02:42.499 CC module/bdev/xnvme/bdev_xnvme.o 00:02:42.499 CC module/bdev/aio/bdev_aio.o 00:02:42.499 CC module/bdev/ftl/bdev_ftl.o 00:02:42.757 LIB libspdk_bdev_lvol.a 00:02:42.757 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:42.757 SO libspdk_bdev_lvol.so.5.0 00:02:42.757 CC module/bdev/split/vbdev_split_rpc.o 00:02:42.757 SYMLINK libspdk_bdev_lvol.so 00:02:42.757 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:43.015 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:43.015 LIB libspdk_bdev_split.a 00:02:43.015 CC module/bdev/aio/bdev_aio_rpc.o 00:02:43.015 CC module/bdev/iscsi/bdev_iscsi.o 00:02:43.015 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:43.016 LIB libspdk_bdev_ftl.a 00:02:43.016 SO libspdk_bdev_split.so.5.0 00:02:43.016 CC module/bdev/raid/bdev_raid_rpc.o 00:02:43.016 SO libspdk_bdev_ftl.so.5.0 00:02:43.016 LIB libspdk_bdev_xnvme.a 00:02:43.016 SYMLINK libspdk_bdev_split.so 00:02:43.016 SYMLINK libspdk_bdev_ftl.so 00:02:43.016 CC module/bdev/nvme/vbdev_opal.o 00:02:43.016 SO libspdk_bdev_xnvme.so.2.0 00:02:43.016 LIB libspdk_bdev_aio.a 00:02:43.016 LIB libspdk_bdev_zone_block.a 00:02:43.016 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:43.016 SO libspdk_bdev_aio.so.5.0 00:02:43.016 SO libspdk_bdev_zone_block.so.5.0 00:02:43.016 SYMLINK libspdk_bdev_xnvme.so 00:02:43.016 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:43.274 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:43.274 SYMLINK libspdk_bdev_zone_block.so 00:02:43.274 SYMLINK libspdk_bdev_aio.so 00:02:43.274 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:43.274 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:43.274 CC module/bdev/raid/bdev_raid_sb.o 00:02:43.274 CC module/bdev/raid/raid0.o 00:02:43.274 LIB libspdk_bdev_iscsi.a 00:02:43.274 CC module/bdev/raid/raid1.o 00:02:43.274 SO libspdk_bdev_iscsi.so.5.0 00:02:43.274 CC module/bdev/raid/concat.o 00:02:43.532 SYMLINK libspdk_bdev_iscsi.so 00:02:43.791 LIB libspdk_bdev_raid.a 00:02:43.791 SO libspdk_bdev_raid.so.5.0 00:02:43.791 LIB libspdk_bdev_virtio.a 00:02:43.791 SO libspdk_bdev_virtio.so.5.0 00:02:43.791 SYMLINK libspdk_bdev_raid.so 00:02:44.050 SYMLINK libspdk_bdev_virtio.so 00:02:44.616 LIB libspdk_bdev_nvme.a 00:02:44.616 SO libspdk_bdev_nvme.so.6.0 00:02:44.875 SYMLINK libspdk_bdev_nvme.so 00:02:45.133 CC module/event/subsystems/iobuf/iobuf.o 00:02:45.133 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:45.133 CC module/event/subsystems/sock/sock.o 00:02:45.133 CC module/event/subsystems/vmd/vmd.o 00:02:45.133 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:45.133 CC module/event/subsystems/scheduler/scheduler.o 00:02:45.133 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:45.392 LIB libspdk_event_sock.a 00:02:45.392 
LIB libspdk_event_scheduler.a 00:02:45.392 LIB libspdk_event_vhost_blk.a 00:02:45.392 LIB libspdk_event_iobuf.a 00:02:45.392 SO libspdk_event_sock.so.4.0 00:02:45.392 LIB libspdk_event_vmd.a 00:02:45.392 SO libspdk_event_scheduler.so.3.0 00:02:45.392 SO libspdk_event_vhost_blk.so.2.0 00:02:45.392 SO libspdk_event_iobuf.so.2.0 00:02:45.392 SO libspdk_event_vmd.so.5.0 00:02:45.392 SYMLINK libspdk_event_sock.so 00:02:45.392 SYMLINK libspdk_event_scheduler.so 00:02:45.392 SYMLINK libspdk_event_vhost_blk.so 00:02:45.392 SYMLINK libspdk_event_iobuf.so 00:02:45.392 SYMLINK libspdk_event_vmd.so 00:02:45.650 CC module/event/subsystems/accel/accel.o 00:02:45.650 LIB libspdk_event_accel.a 00:02:45.908 SO libspdk_event_accel.so.5.0 00:02:45.908 SYMLINK libspdk_event_accel.so 00:02:45.908 CC module/event/subsystems/bdev/bdev.o 00:02:46.166 LIB libspdk_event_bdev.a 00:02:46.166 SO libspdk_event_bdev.so.5.0 00:02:46.424 SYMLINK libspdk_event_bdev.so 00:02:46.424 CC module/event/subsystems/nbd/nbd.o 00:02:46.424 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:46.424 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:46.424 CC module/event/subsystems/scsi/scsi.o 00:02:46.424 CC module/event/subsystems/ublk/ublk.o 00:02:46.682 LIB libspdk_event_nbd.a 00:02:46.682 LIB libspdk_event_scsi.a 00:02:46.682 SO libspdk_event_nbd.so.5.0 00:02:46.682 LIB libspdk_event_ublk.a 00:02:46.682 SO libspdk_event_scsi.so.5.0 00:02:46.682 SO libspdk_event_ublk.so.2.0 00:02:46.682 LIB libspdk_event_nvmf.a 00:02:46.682 SYMLINK libspdk_event_nbd.so 00:02:46.682 SYMLINK libspdk_event_ublk.so 00:02:46.682 SYMLINK libspdk_event_scsi.so 00:02:46.682 SO libspdk_event_nvmf.so.5.0 00:02:46.941 SYMLINK libspdk_event_nvmf.so 00:02:46.941 CC module/event/subsystems/iscsi/iscsi.o 00:02:46.941 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:47.199 LIB libspdk_event_vhost_scsi.a 00:02:47.199 LIB libspdk_event_iscsi.a 00:02:47.200 SO libspdk_event_vhost_scsi.so.2.0 00:02:47.200 SO libspdk_event_iscsi.so.5.0 00:02:47.200 SYMLINK libspdk_event_vhost_scsi.so 00:02:47.200 SYMLINK libspdk_event_iscsi.so 00:02:47.458 SO libspdk.so.5.0 00:02:47.458 SYMLINK libspdk.so 00:02:47.458 CXX app/trace/trace.o 00:02:47.458 CC examples/vmd/lsvmd/lsvmd.o 00:02:47.458 CC examples/nvme/hello_world/hello_world.o 00:02:47.458 CC examples/ioat/perf/perf.o 00:02:47.458 CC examples/accel/perf/accel_perf.o 00:02:47.716 CC examples/sock/hello_world/hello_sock.o 00:02:47.716 CC examples/blob/hello_world/hello_blob.o 00:02:47.716 CC examples/nvmf/nvmf/nvmf.o 00:02:47.716 CC examples/bdev/hello_world/hello_bdev.o 00:02:47.716 CC test/accel/dif/dif.o 00:02:47.716 LINK lsvmd 00:02:47.974 LINK hello_blob 00:02:47.974 LINK ioat_perf 00:02:47.974 LINK hello_world 00:02:47.974 LINK hello_sock 00:02:47.974 LINK hello_bdev 00:02:47.974 CC examples/vmd/led/led.o 00:02:47.974 LINK nvmf 00:02:47.974 LINK spdk_trace 00:02:47.974 CC examples/ioat/verify/verify.o 00:02:48.233 CC examples/nvme/reconnect/reconnect.o 00:02:48.233 LINK led 00:02:48.233 LINK dif 00:02:48.233 LINK accel_perf 00:02:48.233 CC examples/blob/cli/blobcli.o 00:02:48.233 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:48.233 CC app/trace_record/trace_record.o 00:02:48.233 CC examples/bdev/bdevperf/bdevperf.o 00:02:48.233 LINK verify 00:02:48.491 CC examples/util/zipf/zipf.o 00:02:48.491 CC test/app/bdev_svc/bdev_svc.o 00:02:48.491 CC test/bdev/bdevio/bdevio.o 00:02:48.491 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:48.491 LINK zipf 00:02:48.491 LINK reconnect 00:02:48.491 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:48.491 LINK spdk_trace_record 00:02:48.748 LINK bdev_svc 00:02:48.748 LINK blobcli 00:02:48.748 LINK nvme_manage 00:02:48.748 CC app/nvmf_tgt/nvmf_main.o 00:02:48.748 CC app/iscsi_tgt/iscsi_tgt.o 00:02:48.748 CC test/blobfs/mkfs/mkfs.o 00:02:49.006 LINK bdevio 00:02:49.006 LINK nvme_fuzz 00:02:49.006 LINK nvmf_tgt 00:02:49.006 TEST_HEADER include/spdk/accel.h 00:02:49.006 TEST_HEADER include/spdk/accel_module.h 00:02:49.006 TEST_HEADER include/spdk/assert.h 00:02:49.006 TEST_HEADER include/spdk/barrier.h 00:02:49.006 TEST_HEADER include/spdk/base64.h 00:02:49.006 TEST_HEADER include/spdk/bdev.h 00:02:49.006 TEST_HEADER include/spdk/bdev_module.h 00:02:49.006 TEST_HEADER include/spdk/bdev_zone.h 00:02:49.006 TEST_HEADER include/spdk/bit_array.h 00:02:49.006 CC examples/thread/thread/thread_ex.o 00:02:49.006 TEST_HEADER include/spdk/bit_pool.h 00:02:49.006 LINK iscsi_tgt 00:02:49.006 CC examples/nvme/arbitration/arbitration.o 00:02:49.006 TEST_HEADER include/spdk/blob_bdev.h 00:02:49.007 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:49.007 LINK mkfs 00:02:49.007 TEST_HEADER include/spdk/blobfs.h 00:02:49.007 TEST_HEADER include/spdk/blob.h 00:02:49.007 TEST_HEADER include/spdk/conf.h 00:02:49.007 TEST_HEADER include/spdk/config.h 00:02:49.007 TEST_HEADER include/spdk/cpuset.h 00:02:49.007 TEST_HEADER include/spdk/crc16.h 00:02:49.007 TEST_HEADER include/spdk/crc32.h 00:02:49.007 TEST_HEADER include/spdk/crc64.h 00:02:49.007 TEST_HEADER include/spdk/dif.h 00:02:49.007 TEST_HEADER include/spdk/dma.h 00:02:49.007 TEST_HEADER include/spdk/endian.h 00:02:49.007 TEST_HEADER include/spdk/env_dpdk.h 00:02:49.007 TEST_HEADER include/spdk/env.h 00:02:49.007 TEST_HEADER include/spdk/event.h 00:02:49.007 TEST_HEADER include/spdk/fd_group.h 00:02:49.007 TEST_HEADER include/spdk/fd.h 00:02:49.007 TEST_HEADER include/spdk/file.h 00:02:49.007 TEST_HEADER include/spdk/ftl.h 00:02:49.007 TEST_HEADER include/spdk/gpt_spec.h 00:02:49.007 TEST_HEADER include/spdk/hexlify.h 00:02:49.007 TEST_HEADER include/spdk/histogram_data.h 00:02:49.007 TEST_HEADER include/spdk/idxd.h 00:02:49.265 TEST_HEADER include/spdk/idxd_spec.h 00:02:49.265 TEST_HEADER include/spdk/init.h 00:02:49.265 TEST_HEADER include/spdk/ioat.h 00:02:49.265 TEST_HEADER include/spdk/ioat_spec.h 00:02:49.265 TEST_HEADER include/spdk/iscsi_spec.h 00:02:49.265 TEST_HEADER include/spdk/json.h 00:02:49.265 TEST_HEADER include/spdk/jsonrpc.h 00:02:49.265 TEST_HEADER include/spdk/likely.h 00:02:49.265 TEST_HEADER include/spdk/log.h 00:02:49.265 TEST_HEADER include/spdk/lvol.h 00:02:49.265 TEST_HEADER include/spdk/memory.h 00:02:49.265 TEST_HEADER include/spdk/mmio.h 00:02:49.265 TEST_HEADER include/spdk/nbd.h 00:02:49.265 TEST_HEADER include/spdk/notify.h 00:02:49.265 TEST_HEADER include/spdk/nvme.h 00:02:49.265 TEST_HEADER include/spdk/nvme_intel.h 00:02:49.265 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:49.265 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:49.265 TEST_HEADER include/spdk/nvme_spec.h 00:02:49.265 TEST_HEADER include/spdk/nvme_zns.h 00:02:49.265 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:49.265 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:49.265 TEST_HEADER include/spdk/nvmf.h 00:02:49.265 TEST_HEADER include/spdk/nvmf_spec.h 00:02:49.265 TEST_HEADER include/spdk/nvmf_transport.h 00:02:49.265 TEST_HEADER include/spdk/opal.h 00:02:49.265 TEST_HEADER include/spdk/opal_spec.h 00:02:49.265 TEST_HEADER include/spdk/pci_ids.h 00:02:49.265 TEST_HEADER include/spdk/pipe.h 00:02:49.265 TEST_HEADER 
include/spdk/queue.h 00:02:49.265 TEST_HEADER include/spdk/reduce.h 00:02:49.265 TEST_HEADER include/spdk/rpc.h 00:02:49.265 TEST_HEADER include/spdk/scheduler.h 00:02:49.265 TEST_HEADER include/spdk/scsi.h 00:02:49.265 TEST_HEADER include/spdk/scsi_spec.h 00:02:49.265 TEST_HEADER include/spdk/sock.h 00:02:49.265 TEST_HEADER include/spdk/stdinc.h 00:02:49.265 TEST_HEADER include/spdk/string.h 00:02:49.265 TEST_HEADER include/spdk/thread.h 00:02:49.265 TEST_HEADER include/spdk/trace.h 00:02:49.265 TEST_HEADER include/spdk/trace_parser.h 00:02:49.265 TEST_HEADER include/spdk/tree.h 00:02:49.265 TEST_HEADER include/spdk/ublk.h 00:02:49.265 TEST_HEADER include/spdk/util.h 00:02:49.265 TEST_HEADER include/spdk/uuid.h 00:02:49.265 TEST_HEADER include/spdk/version.h 00:02:49.265 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:49.265 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:49.265 TEST_HEADER include/spdk/vhost.h 00:02:49.265 TEST_HEADER include/spdk/vmd.h 00:02:49.265 TEST_HEADER include/spdk/xor.h 00:02:49.265 TEST_HEADER include/spdk/zipf.h 00:02:49.265 CXX test/cpp_headers/accel.o 00:02:49.265 LINK bdevperf 00:02:49.265 CXX test/cpp_headers/accel_module.o 00:02:49.265 CC examples/nvme/hotplug/hotplug.o 00:02:49.265 CXX test/cpp_headers/assert.o 00:02:49.265 CXX test/cpp_headers/barrier.o 00:02:49.265 LINK thread 00:02:49.523 CXX test/cpp_headers/base64.o 00:02:49.523 CC app/spdk_tgt/spdk_tgt.o 00:02:49.523 LINK arbitration 00:02:49.523 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:49.523 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:49.523 CXX test/cpp_headers/bdev.o 00:02:49.523 LINK hotplug 00:02:49.523 CXX test/cpp_headers/bdev_module.o 00:02:49.781 CC test/dma/test_dma/test_dma.o 00:02:49.781 LINK spdk_tgt 00:02:49.781 CC test/event/event_perf/event_perf.o 00:02:49.781 CC test/env/vtophys/vtophys.o 00:02:49.781 CXX test/cpp_headers/bdev_zone.o 00:02:49.781 CC test/env/mem_callbacks/mem_callbacks.o 00:02:49.781 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:49.781 CC test/lvol/esnap/esnap.o 00:02:50.039 LINK vtophys 00:02:50.039 LINK event_perf 00:02:50.039 CC app/spdk_lspci/spdk_lspci.o 00:02:50.039 CXX test/cpp_headers/bit_array.o 00:02:50.039 LINK cmb_copy 00:02:50.039 LINK vhost_fuzz 00:02:50.039 LINK test_dma 00:02:50.039 CC test/event/reactor/reactor.o 00:02:50.297 LINK spdk_lspci 00:02:50.297 CC test/event/reactor_perf/reactor_perf.o 00:02:50.297 CXX test/cpp_headers/bit_pool.o 00:02:50.297 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:50.297 CC examples/nvme/abort/abort.o 00:02:50.297 LINK reactor 00:02:50.297 LINK reactor_perf 00:02:50.297 CC test/env/memory/memory_ut.o 00:02:50.297 CXX test/cpp_headers/blob_bdev.o 00:02:50.556 LINK env_dpdk_post_init 00:02:50.556 LINK mem_callbacks 00:02:50.556 CC app/spdk_nvme_perf/perf.o 00:02:50.556 CXX test/cpp_headers/blobfs_bdev.o 00:02:50.556 CC test/event/app_repeat/app_repeat.o 00:02:50.819 CC app/spdk_nvme_identify/identify.o 00:02:50.819 CC test/event/scheduler/scheduler.o 00:02:50.819 CC app/spdk_nvme_discover/discovery_aer.o 00:02:50.819 CXX test/cpp_headers/blobfs.o 00:02:50.819 LINK iscsi_fuzz 00:02:50.819 LINK abort 00:02:50.819 LINK app_repeat 00:02:51.077 LINK scheduler 00:02:51.077 CXX test/cpp_headers/blob.o 00:02:51.077 LINK spdk_nvme_discover 00:02:51.077 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:51.077 CC app/spdk_top/spdk_top.o 00:02:51.077 CC test/app/histogram_perf/histogram_perf.o 00:02:51.077 CXX test/cpp_headers/conf.o 00:02:51.077 CXX test/cpp_headers/config.o 00:02:51.335 LINK 
pmr_persistence 00:02:51.335 CC test/rpc_client/rpc_client_test.o 00:02:51.335 LINK histogram_perf 00:02:51.335 CC test/nvme/aer/aer.o 00:02:51.335 CXX test/cpp_headers/cpuset.o 00:02:51.335 LINK memory_ut 00:02:51.593 LINK rpc_client_test 00:02:51.593 LINK spdk_nvme_perf 00:02:51.593 CXX test/cpp_headers/crc16.o 00:02:51.593 CC test/app/jsoncat/jsoncat.o 00:02:51.593 CC examples/idxd/perf/perf.o 00:02:51.593 LINK aer 00:02:51.593 CXX test/cpp_headers/crc32.o 00:02:51.593 LINK jsoncat 00:02:51.850 CC test/env/pci/pci_ut.o 00:02:51.850 LINK spdk_nvme_identify 00:02:51.850 CC test/nvme/reset/reset.o 00:02:51.850 CC test/thread/poller_perf/poller_perf.o 00:02:51.850 CXX test/cpp_headers/crc64.o 00:02:51.850 CC test/app/stub/stub.o 00:02:51.850 LINK idxd_perf 00:02:51.850 LINK poller_perf 00:02:51.850 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:52.108 CC app/vhost/vhost.o 00:02:52.108 CXX test/cpp_headers/dif.o 00:02:52.108 LINK reset 00:02:52.108 LINK stub 00:02:52.108 LINK interrupt_tgt 00:02:52.108 LINK spdk_top 00:02:52.108 CC app/spdk_dd/spdk_dd.o 00:02:52.366 CC app/fio/nvme/fio_plugin.o 00:02:52.366 LINK pci_ut 00:02:52.366 LINK vhost 00:02:52.366 CXX test/cpp_headers/dma.o 00:02:52.366 CC test/nvme/sgl/sgl.o 00:02:52.366 CC test/nvme/e2edp/nvme_dp.o 00:02:52.366 CXX test/cpp_headers/endian.o 00:02:52.366 CXX test/cpp_headers/env_dpdk.o 00:02:52.366 CC app/fio/bdev/fio_plugin.o 00:02:52.624 CC test/nvme/overhead/overhead.o 00:02:52.624 CXX test/cpp_headers/env.o 00:02:52.624 CXX test/cpp_headers/event.o 00:02:52.624 CC test/nvme/err_injection/err_injection.o 00:02:52.624 LINK spdk_dd 00:02:52.624 LINK sgl 00:02:52.624 LINK nvme_dp 00:02:52.882 CXX test/cpp_headers/fd_group.o 00:02:52.882 CC test/nvme/startup/startup.o 00:02:52.882 LINK err_injection 00:02:52.882 LINK overhead 00:02:52.882 LINK spdk_nvme 00:02:52.882 CC test/nvme/reserve/reserve.o 00:02:52.882 CC test/nvme/simple_copy/simple_copy.o 00:02:52.882 CXX test/cpp_headers/fd.o 00:02:52.882 CC test/nvme/connect_stress/connect_stress.o 00:02:52.882 LINK startup 00:02:53.140 CC test/nvme/boot_partition/boot_partition.o 00:02:53.140 CC test/nvme/compliance/nvme_compliance.o 00:02:53.140 LINK spdk_bdev 00:02:53.140 CC test/nvme/fused_ordering/fused_ordering.o 00:02:53.140 CXX test/cpp_headers/file.o 00:02:53.140 LINK connect_stress 00:02:53.140 LINK reserve 00:02:53.140 LINK simple_copy 00:02:53.140 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:53.140 LINK boot_partition 00:02:53.399 CC test/nvme/fdp/fdp.o 00:02:53.399 CXX test/cpp_headers/ftl.o 00:02:53.399 LINK fused_ordering 00:02:53.399 CXX test/cpp_headers/gpt_spec.o 00:02:53.399 CXX test/cpp_headers/hexlify.o 00:02:53.399 CC test/nvme/cuse/cuse.o 00:02:53.399 LINK doorbell_aers 00:02:53.399 CXX test/cpp_headers/histogram_data.o 00:02:53.399 LINK nvme_compliance 00:02:53.657 CXX test/cpp_headers/idxd.o 00:02:53.657 CXX test/cpp_headers/idxd_spec.o 00:02:53.657 CXX test/cpp_headers/init.o 00:02:53.657 CXX test/cpp_headers/ioat.o 00:02:53.657 CXX test/cpp_headers/ioat_spec.o 00:02:53.657 LINK fdp 00:02:53.657 CXX test/cpp_headers/iscsi_spec.o 00:02:53.657 CXX test/cpp_headers/json.o 00:02:53.657 CXX test/cpp_headers/jsonrpc.o 00:02:53.657 CXX test/cpp_headers/likely.o 00:02:53.915 CXX test/cpp_headers/log.o 00:02:53.915 CXX test/cpp_headers/lvol.o 00:02:53.915 CXX test/cpp_headers/memory.o 00:02:53.915 CXX test/cpp_headers/mmio.o 00:02:53.915 CXX test/cpp_headers/nbd.o 00:02:53.915 CXX test/cpp_headers/notify.o 00:02:53.915 CXX test/cpp_headers/nvme.o 00:02:53.915 CXX 
test/cpp_headers/nvme_intel.o 00:02:53.915 CXX test/cpp_headers/nvme_ocssd.o 00:02:53.915 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:53.915 CXX test/cpp_headers/nvme_spec.o 00:02:53.915 CXX test/cpp_headers/nvme_zns.o 00:02:53.915 CXX test/cpp_headers/nvmf_cmd.o 00:02:54.173 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:54.173 CXX test/cpp_headers/nvmf.o 00:02:54.173 CXX test/cpp_headers/nvmf_spec.o 00:02:54.173 CXX test/cpp_headers/nvmf_transport.o 00:02:54.173 CXX test/cpp_headers/opal.o 00:02:54.173 CXX test/cpp_headers/opal_spec.o 00:02:54.173 CXX test/cpp_headers/pci_ids.o 00:02:54.173 CXX test/cpp_headers/pipe.o 00:02:54.173 CXX test/cpp_headers/queue.o 00:02:54.431 CXX test/cpp_headers/reduce.o 00:02:54.431 CXX test/cpp_headers/rpc.o 00:02:54.431 CXX test/cpp_headers/scheduler.o 00:02:54.431 CXX test/cpp_headers/scsi.o 00:02:54.431 CXX test/cpp_headers/scsi_spec.o 00:02:54.431 CXX test/cpp_headers/sock.o 00:02:54.431 CXX test/cpp_headers/stdinc.o 00:02:54.431 CXX test/cpp_headers/string.o 00:02:54.431 CXX test/cpp_headers/thread.o 00:02:54.431 CXX test/cpp_headers/trace.o 00:02:54.431 CXX test/cpp_headers/trace_parser.o 00:02:54.431 CXX test/cpp_headers/tree.o 00:02:54.431 CXX test/cpp_headers/ublk.o 00:02:54.431 CXX test/cpp_headers/util.o 00:02:54.431 CXX test/cpp_headers/uuid.o 00:02:54.689 CXX test/cpp_headers/version.o 00:02:54.689 CXX test/cpp_headers/vfio_user_pci.o 00:02:54.689 CXX test/cpp_headers/vfio_user_spec.o 00:02:54.689 CXX test/cpp_headers/vhost.o 00:02:54.689 CXX test/cpp_headers/vmd.o 00:02:54.689 CXX test/cpp_headers/xor.o 00:02:54.689 LINK cuse 00:02:54.689 CXX test/cpp_headers/zipf.o 00:02:56.062 LINK esnap 00:02:56.627 00:02:56.627 real 1m9.659s 00:02:56.627 user 7m10.498s 00:02:56.627 sys 1m24.293s 00:02:56.627 09:40:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:56.627 09:40:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:56.627 ************************************ 00:02:56.627 END TEST make 00:02:56.627 ************************************ 00:02:56.627 09:40:50 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:56.627 09:40:50 -- nvmf/common.sh@7 -- # uname -s 00:02:56.627 09:40:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:56.627 09:40:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:56.627 09:40:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:56.627 09:40:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:56.627 09:40:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:56.627 09:40:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:56.627 09:40:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:56.627 09:40:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:56.627 09:40:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:56.627 09:40:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:56.627 09:40:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ba251500-b233-4587-8b38-2bc1a120701d 00:02:56.627 09:40:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=ba251500-b233-4587-8b38-2bc1a120701d 00:02:56.627 09:40:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:56.627 09:40:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:56.627 09:40:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:56.627 09:40:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:56.627 09:40:50 -- scripts/common.sh@433 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:02:56.627 09:40:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:56.627 09:40:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:56.627 09:40:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.627 09:40:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.627 09:40:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.627 09:40:50 -- paths/export.sh@5 -- # export PATH 00:02:56.627 09:40:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.627 09:40:50 -- nvmf/common.sh@46 -- # : 0 00:02:56.627 09:40:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:56.627 09:40:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:56.627 09:40:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:56.627 09:40:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:56.627 09:40:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:56.627 09:40:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:56.627 09:40:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:56.627 09:40:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:56.627 09:40:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:56.627 09:40:50 -- spdk/autotest.sh@32 -- # uname -s 00:02:56.627 09:40:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:56.627 09:40:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:56.627 09:40:50 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:56.627 09:40:50 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:56.627 09:40:50 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:56.627 09:40:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:56.627 09:40:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:56.627 09:40:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:56.627 09:40:50 -- spdk/autotest.sh@48 -- # udevadm_pid=48354 00:02:56.628 09:40:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:56.628 09:40:50 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:02:56.628 09:40:50 -- spdk/autotest.sh@54 -- # echo 48368 00:02:56.628 09:40:50 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:56.628 09:40:50 -- spdk/autotest.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:56.628 09:40:50 -- spdk/autotest.sh@56 -- # echo 48374 00:02:56.628 09:40:50 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:02:56.628 09:40:50 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:56.628 09:40:50 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:56.628 09:40:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:56.628 09:40:50 -- common/autotest_common.sh@10 -- # set +x 00:02:56.628 09:40:50 -- spdk/autotest.sh@70 -- # create_test_list 00:02:56.628 09:40:50 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:56.628 09:40:50 -- common/autotest_common.sh@10 -- # set +x 00:02:56.887 09:40:50 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:56.887 09:40:50 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:56.887 09:40:50 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:02:56.887 09:40:50 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:56.887 09:40:50 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:02:56.887 09:40:50 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:56.887 09:40:50 -- common/autotest_common.sh@1440 -- # uname 00:02:56.887 09:40:50 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:56.887 09:40:50 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:56.887 09:40:50 -- common/autotest_common.sh@1460 -- # uname 00:02:56.887 09:40:50 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:56.887 09:40:50 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:56.887 09:40:50 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:56.887 09:40:50 -- spdk/autotest.sh@83 -- # hash lcov 00:02:56.887 09:40:50 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:56.887 09:40:50 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:56.887 --rc lcov_branch_coverage=1 00:02:56.887 --rc lcov_function_coverage=1 00:02:56.887 --rc genhtml_branch_coverage=1 00:02:56.887 --rc genhtml_function_coverage=1 00:02:56.887 --rc genhtml_legend=1 00:02:56.887 --rc geninfo_all_blocks=1 00:02:56.887 ' 00:02:56.887 09:40:50 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:56.887 --rc lcov_branch_coverage=1 00:02:56.887 --rc lcov_function_coverage=1 00:02:56.887 --rc genhtml_branch_coverage=1 00:02:56.887 --rc genhtml_function_coverage=1 00:02:56.887 --rc genhtml_legend=1 00:02:56.887 --rc geninfo_all_blocks=1 00:02:56.887 ' 00:02:56.887 09:40:50 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:56.887 --rc lcov_branch_coverage=1 00:02:56.887 --rc lcov_function_coverage=1 00:02:56.887 --rc genhtml_branch_coverage=1 00:02:56.887 --rc genhtml_function_coverage=1 00:02:56.887 --rc genhtml_legend=1 00:02:56.887 --rc geninfo_all_blocks=1 00:02:56.887 --no-external' 00:02:56.887 09:40:50 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:56.887 --rc lcov_branch_coverage=1 00:02:56.887 --rc lcov_function_coverage=1 00:02:56.887 --rc genhtml_branch_coverage=1 00:02:56.887 --rc genhtml_function_coverage=1 00:02:56.887 --rc genhtml_legend=1 00:02:56.887 --rc geninfo_all_blocks=1 00:02:56.887 --no-external' 00:02:56.887 09:40:50 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:56.887 lcov: 
LCOV version 1.14 00:02:56.887 09:40:50 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:04.995 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:04.995 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:04.995 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:04.995 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:04.995 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:04.995 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:23.088 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:23.088 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:23.088 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:23.089 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:23.089 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:23.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:23.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:26.392 09:41:19 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:26.392 09:41:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:26.392 09:41:19 -- common/autotest_common.sh@10 -- # set +x 00:03:26.392 09:41:19 -- spdk/autotest.sh@102 -- # rm -f 00:03:26.392 09:41:19 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:26.651 lsblk: /dev/nvme3c3n1: not a block device 00:03:26.909 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:26.909 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:03:26.909 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:03:27.168 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:27.168 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:27.168 09:41:20 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:27.168 09:41:20 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:27.168 09:41:20 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:27.168 09:41:20 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:27.168 09:41:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:27.168 09:41:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:27.168 09:41:20 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:27.168 09:41:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:27.168 09:41:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:27.168 09:41:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:27.168 09:41:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:27.168 09:41:20 -- 
common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:27.168 09:41:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:27.168 09:41:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:27.168 09:41:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:27.168 09:41:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:03:27.168 09:41:20 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:03:27.168 09:41:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:27.168 09:41:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:27.168 09:41:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:27.168 09:41:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n2 00:03:27.168 09:41:20 -- common/autotest_common.sh@1647 -- # local device=nvme2n2 00:03:27.168 09:41:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:27.168 09:41:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:27.168 09:41:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:27.168 09:41:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n3 00:03:27.168 09:41:20 -- common/autotest_common.sh@1647 -- # local device=nvme2n3 00:03:27.168 09:41:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:27.168 09:41:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:27.168 09:41:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:27.168 09:41:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3c3n1 00:03:27.168 09:41:20 -- common/autotest_common.sh@1647 -- # local device=nvme3c3n1 00:03:27.168 09:41:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:27.168 09:41:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:27.168 09:41:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:27.168 09:41:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:03:27.168 09:41:20 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:03:27.168 09:41:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:27.168 09:41:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:27.168 09:41:20 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:27.168 09:41:20 -- spdk/autotest.sh@121 -- # grep -v p 00:03:27.168 09:41:20 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:03:27.168 09:41:20 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:27.168 09:41:20 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:27.168 09:41:20 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:27.168 09:41:20 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:27.168 09:41:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:27.168 No valid GPT data, bailing 00:03:27.168 09:41:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:27.168 09:41:20 -- scripts/common.sh@393 -- # pt= 00:03:27.168 09:41:20 -- scripts/common.sh@394 -- # return 1 00:03:27.168 09:41:20 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:27.168 1+0 records in 00:03:27.168 1+0 records out 00:03:27.168 1048576 bytes (1.0 
MB, 1.0 MiB) copied, 0.0114886 s, 91.3 MB/s 00:03:27.168 09:41:20 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:27.168 09:41:20 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:27.168 09:41:20 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:03:27.168 09:41:20 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:27.168 09:41:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:27.168 No valid GPT data, bailing 00:03:27.168 09:41:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:27.168 09:41:20 -- scripts/common.sh@393 -- # pt= 00:03:27.168 09:41:20 -- scripts/common.sh@394 -- # return 1 00:03:27.169 09:41:20 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:27.169 1+0 records in 00:03:27.169 1+0 records out 00:03:27.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426526 s, 246 MB/s 00:03:27.169 09:41:20 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:27.169 09:41:20 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:27.169 09:41:20 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n1 00:03:27.169 09:41:20 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:03:27.169 09:41:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:27.427 No valid GPT data, bailing 00:03:27.427 09:41:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:27.427 09:41:20 -- scripts/common.sh@393 -- # pt= 00:03:27.427 09:41:20 -- scripts/common.sh@394 -- # return 1 00:03:27.427 09:41:20 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:27.427 1+0 records in 00:03:27.427 1+0 records out 00:03:27.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422622 s, 248 MB/s 00:03:27.427 09:41:20 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:27.427 09:41:20 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:27.427 09:41:20 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n2 00:03:27.427 09:41:20 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:03:27.427 09:41:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:27.427 No valid GPT data, bailing 00:03:27.427 09:41:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:27.427 09:41:21 -- scripts/common.sh@393 -- # pt= 00:03:27.427 09:41:21 -- scripts/common.sh@394 -- # return 1 00:03:27.427 09:41:21 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:27.427 1+0 records in 00:03:27.427 1+0 records out 00:03:27.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00384039 s, 273 MB/s 00:03:27.427 09:41:21 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:27.427 09:41:21 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:27.427 09:41:21 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n3 00:03:27.427 09:41:21 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:03:27.427 09:41:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:27.427 No valid GPT data, bailing 00:03:27.427 09:41:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:27.427 09:41:21 -- scripts/common.sh@393 -- # pt= 00:03:27.427 09:41:21 -- scripts/common.sh@394 -- # return 1 00:03:27.427 09:41:21 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M 
count=1 00:03:27.427 1+0 records in 00:03:27.427 1+0 records out 00:03:27.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00342464 s, 306 MB/s 00:03:27.427 09:41:21 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:27.427 09:41:21 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:27.427 09:41:21 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme3n1 00:03:27.427 09:41:21 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:03:27.427 09:41:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:27.685 No valid GPT data, bailing 00:03:27.685 09:41:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:27.685 09:41:21 -- scripts/common.sh@393 -- # pt= 00:03:27.685 09:41:21 -- scripts/common.sh@394 -- # return 1 00:03:27.685 09:41:21 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:27.685 1+0 records in 00:03:27.685 1+0 records out 00:03:27.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00359044 s, 292 MB/s 00:03:27.685 09:41:21 -- spdk/autotest.sh@129 -- # sync 00:03:27.685 09:41:21 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:27.685 09:41:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:27.685 09:41:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:29.587 09:41:23 -- spdk/autotest.sh@135 -- # uname -s 00:03:29.587 09:41:23 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:29.587 09:41:23 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:29.587 09:41:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.587 09:41:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.587 09:41:23 -- common/autotest_common.sh@10 -- # set +x 00:03:29.587 ************************************ 00:03:29.587 START TEST setup.sh 00:03:29.587 ************************************ 00:03:29.587 09:41:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:29.587 * Looking for test storage... 00:03:29.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:29.587 09:41:23 -- setup/test-setup.sh@10 -- # uname -s 00:03:29.587 09:41:23 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:29.587 09:41:23 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:29.587 09:41:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.587 09:41:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.587 09:41:23 -- common/autotest_common.sh@10 -- # set +x 00:03:29.587 ************************************ 00:03:29.587 START TEST acl 00:03:29.587 ************************************ 00:03:29.587 09:41:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:29.587 * Looking for test storage... 
00:03:29.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:29.587 09:41:23 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:29.587 09:41:23 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:29.587 09:41:23 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:29.587 09:41:23 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:29.587 09:41:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:29.587 09:41:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:29.587 09:41:23 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:29.587 09:41:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:29.587 09:41:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:29.587 09:41:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:29.587 09:41:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:29.587 09:41:23 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:29.587 09:41:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:29.587 09:41:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:29.587 09:41:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:29.587 09:41:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:03:29.587 09:41:23 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:03:29.587 09:41:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:29.587 09:41:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:29.587 09:41:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:29.587 09:41:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n2 00:03:29.587 09:41:23 -- common/autotest_common.sh@1647 -- # local device=nvme2n2 00:03:29.587 09:41:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:29.587 09:41:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:29.587 09:41:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:29.587 09:41:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n3 00:03:29.587 09:41:23 -- common/autotest_common.sh@1647 -- # local device=nvme2n3 00:03:29.587 09:41:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:29.587 09:41:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:29.587 09:41:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:29.587 09:41:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3c3n1 00:03:29.587 09:41:23 -- common/autotest_common.sh@1647 -- # local device=nvme3c3n1 00:03:29.587 09:41:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:29.587 09:41:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:29.587 09:41:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:29.587 09:41:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:03:29.587 09:41:23 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:03:29.587 09:41:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:29.587 09:41:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:29.587 09:41:23 -- setup/acl.sh@12 -- # devs=() 00:03:29.587 09:41:23 -- setup/acl.sh@12 -- # declare -a devs 
00:03:29.587 09:41:23 -- setup/acl.sh@13 -- # drivers=() 00:03:29.587 09:41:23 -- setup/acl.sh@13 -- # declare -A drivers 00:03:29.588 09:41:23 -- setup/acl.sh@51 -- # setup reset 00:03:29.588 09:41:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.588 09:41:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:30.962 09:41:24 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:30.962 09:41:24 -- setup/acl.sh@16 -- # local dev driver 00:03:30.962 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.962 09:41:24 -- setup/acl.sh@15 -- # setup output status 00:03:30.962 09:41:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.962 09:41:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:30.962 Hugepages 00:03:30.962 node hugesize free / total 00:03:30.962 09:41:24 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:30.962 09:41:24 -- setup/acl.sh@19 -- # continue 00:03:30.962 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.962 00:03:30.962 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:30.962 09:41:24 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:30.962 09:41:24 -- setup/acl.sh@19 -- # continue 00:03:30.962 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.962 09:41:24 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:30.962 09:41:24 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:30.962 09:41:24 -- setup/acl.sh@20 -- # continue 00:03:30.962 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.220 09:41:24 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:31.220 09:41:24 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:31.220 09:41:24 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:31.220 09:41:24 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:31.220 09:41:24 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:31.220 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.220 09:41:24 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:31.220 09:41:24 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:31.220 09:41:24 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:31.220 09:41:24 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:31.220 09:41:24 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:31.220 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.477 09:41:24 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:03:31.477 09:41:24 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:31.477 09:41:24 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:31.477 09:41:24 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:31.477 09:41:24 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:31.477 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.477 09:41:25 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:03:31.477 09:41:25 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:31.477 09:41:25 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:31.477 09:41:25 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:31.477 09:41:25 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:31.477 09:41:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.477 09:41:25 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:31.477 09:41:25 -- setup/acl.sh@54 -- # run_test denied denied 00:03:31.477 09:41:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:31.477 
09:41:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:31.477 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:03:31.477 ************************************ 00:03:31.477 START TEST denied 00:03:31.477 ************************************ 00:03:31.477 09:41:25 -- common/autotest_common.sh@1104 -- # denied 00:03:31.477 09:41:25 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:31.477 09:41:25 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:31.477 09:41:25 -- setup/acl.sh@38 -- # setup output config 00:03:31.477 09:41:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.477 09:41:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:32.411 lsblk: /dev/nvme3c3n1: not a block device 00:03:32.669 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:32.669 09:41:26 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:32.669 09:41:26 -- setup/acl.sh@28 -- # local dev driver 00:03:32.669 09:41:26 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:32.669 09:41:26 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:32.669 09:41:26 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:32.669 09:41:26 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:32.669 09:41:26 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:32.669 09:41:26 -- setup/acl.sh@41 -- # setup reset 00:03:32.669 09:41:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.669 09:41:26 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.227 00:03:39.227 real 0m7.234s 00:03:39.227 user 0m0.885s 00:03:39.227 sys 0m1.422s 00:03:39.227 09:41:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.227 09:41:32 -- common/autotest_common.sh@10 -- # set +x 00:03:39.227 ************************************ 00:03:39.227 END TEST denied 00:03:39.227 ************************************ 00:03:39.227 09:41:32 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:39.227 09:41:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:39.227 09:41:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:39.227 09:41:32 -- common/autotest_common.sh@10 -- # set +x 00:03:39.227 ************************************ 00:03:39.227 START TEST allowed 00:03:39.227 ************************************ 00:03:39.227 09:41:32 -- common/autotest_common.sh@1104 -- # allowed 00:03:39.227 09:41:32 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:39.227 09:41:32 -- setup/acl.sh@45 -- # setup output config 00:03:39.227 09:41:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.227 09:41:32 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:39.227 09:41:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:39.486 lsblk: /dev/nvme1c1n1: not a block device 00:03:40.053 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:40.054 09:41:33 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:03:40.054 09:41:33 -- setup/acl.sh@28 -- # local dev driver 00:03:40.054 09:41:33 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:40.054 09:41:33 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:40.054 09:41:33 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:03:40.054 09:41:33 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:40.054 09:41:33 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 
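
The denied/allowed pair being run here drives setup.sh with PCI_BLOCKED (and, below, PCI_ALLOWED) and then confirms through sysfs which driver actually claimed each controller. The verify step seen in the trace reduces to resolving the device's driver symlink; roughly (a sketch assuming the names shown in the trace, with an illustrative invocation, not the test's exact command line):

    # Confirm each PCI function is still bound to the kernel nvme driver.
    verify() {
        local dev driver
        for dev in "$@"; do
            # The function must exist on the PCI bus...
            [[ -e /sys/bus/pci/devices/$dev ]] || return 1
            # ...and its driver symlink must resolve to the nvme driver.
            driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
            [[ ${driver##*/} == nvme ]] || return 1
        done
    }

    PCI_BLOCKED=' 0000:00:06.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config |
        grep 'Skipping denied controller at 0000:00:06.0'   # setup must refuse the blocked BDF
    verify 0000:00:06.0   # and the blocked controller must remain on nvme
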
00:03:40.054 09:41:33 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:40.054 09:41:33 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:03:40.054 09:41:33 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:03:40.054 09:41:33 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:40.054 09:41:33 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:40.054 09:41:33 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:40.054 09:41:33 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:03:40.054 09:41:33 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:03:40.054 09:41:33 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:40.054 09:41:33 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:40.054 09:41:33 -- setup/acl.sh@48 -- # setup reset 00:03:40.054 09:41:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.054 09:41:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.990 00:03:40.990 real 0m2.378s 00:03:40.990 user 0m1.044s 00:03:40.990 sys 0m1.336s 00:03:40.990 09:41:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.990 09:41:34 -- common/autotest_common.sh@10 -- # set +x 00:03:40.990 ************************************ 00:03:40.990 END TEST allowed 00:03:40.990 ************************************ 00:03:41.250 00:03:41.250 real 0m11.553s 00:03:41.250 user 0m2.817s 00:03:41.250 sys 0m3.858s 00:03:41.250 09:41:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.250 09:41:34 -- common/autotest_common.sh@10 -- # set +x 00:03:41.250 ************************************ 00:03:41.250 END TEST acl 00:03:41.250 ************************************ 00:03:41.250 09:41:34 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:41.250 09:41:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.250 09:41:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.250 09:41:34 -- common/autotest_common.sh@10 -- # set +x 00:03:41.250 ************************************ 00:03:41.250 START TEST hugepages 00:03:41.250 ************************************ 00:03:41.250 09:41:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:41.250 * Looking for test storage... 
00:03:41.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:41.250 09:41:34 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:41.250 09:41:34 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:41.250 09:41:34 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:41.250 09:41:34 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:41.250 09:41:34 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:41.250 09:41:34 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:41.250 09:41:34 -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:41.250 09:41:34 -- setup/common.sh@18 -- # local node=
00:03:41.250 09:41:34 -- setup/common.sh@19 -- # local var val
00:03:41.250 09:41:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.250 09:41:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.250 09:41:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.250 09:41:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.250 09:41:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.250 09:41:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.250 09:41:34 -- setup/common.sh@31 -- # IFS=': '
00:03:41.250 09:41:34 -- setup/common.sh@31 -- # read -r var val _
00:03:41.251 09:41:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 5852460 kB' 'MemAvailable: 7389952 kB' 'Buffers: 2436 kB' 'Cached: 1751412 kB' 'SwapCached: 0 kB' 'Active: 444108 kB' 'Inactive: 1411336 kB' 'Active(anon): 112108 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 103196 kB' 'Mapped: 48640 kB' 'Shmem: 10512 kB' 'KReclaimable: 62640 kB' 'Slab: 134872 kB' 'SReclaimable: 62640 kB' 'SUnreclaim: 72232 kB' 'KernelStack: 6428 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412432 kB' 'Committed_AS: 327052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:41.251 [xtrace elided: the scan loop 'continue's past every meminfo field until it reaches Hugepagesize]
00:03:41.252 09:41:34 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:41.252 09:41:34 -- setup/common.sh@33 -- # echo 2048
00:03:41.252 09:41:34 -- setup/common.sh@33 -- # return 0
00:03:41.252 09:41:34 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:41.252 09:41:34 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:41.252 09:41:34 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:41.252 09:41:34 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:41.252 09:41:34 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:41.252 09:41:34 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
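
The elided loop above is the whole of get_meminfo: it slurps /proc/meminfo (or a node-local meminfo), strips any "Node N" prefix, and scans field by field until the requested key matches. Reconstructed as a standalone sketch from the traced commands (the real helper lives in test/setup/common.sh and may differ in detail):

    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Print the value of one meminfo field, e.g. `get_meminfo Hugepagesize` -> 2048.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node queries read the node-local meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on node-local lines
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Fields are scanned in file order; everything before the match is skipped.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo Hugepagesize   # -> 2048, matching the 'echo 2048' in the trace above
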
00:03:41.252 09:41:34 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:41.252 09:41:34 -- setup/hugepages.sh@207 -- # get_nodes 00:03:41.252 09:41:34 -- setup/hugepages.sh@27 -- # local node 00:03:41.252 09:41:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.252 09:41:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:41.252 09:41:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:41.252 09:41:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.252 09:41:34 -- setup/hugepages.sh@208 -- # clear_hp 00:03:41.252 09:41:34 -- setup/hugepages.sh@37 -- # local node hp 00:03:41.252 09:41:34 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:41.252 09:41:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.252 09:41:34 -- setup/hugepages.sh@41 -- # echo 0 00:03:41.252 09:41:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.252 09:41:34 -- setup/hugepages.sh@41 -- # echo 0 00:03:41.252 09:41:34 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:41.252 09:41:34 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:41.252 09:41:34 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:41.252 09:41:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.252 09:41:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.252 09:41:34 -- common/autotest_common.sh@10 -- # set +x 00:03:41.252 ************************************ 00:03:41.252 START TEST default_setup 00:03:41.252 ************************************ 00:03:41.252 09:41:34 -- common/autotest_common.sh@1104 -- # default_setup 00:03:41.252 09:41:34 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:41.252 09:41:34 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:41.252 09:41:34 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:41.252 09:41:34 -- setup/hugepages.sh@51 -- # shift 00:03:41.252 09:41:34 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:41.252 09:41:34 -- setup/hugepages.sh@52 -- # local node_ids 00:03:41.252 09:41:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:41.252 09:41:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:41.252 09:41:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:41.252 09:41:34 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:41.252 09:41:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:41.252 09:41:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:41.252 09:41:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:41.252 09:41:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:41.252 09:41:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:41.252 09:41:34 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:41.252 09:41:34 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:41.252 09:41:34 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:41.252 09:41:34 -- setup/hugepages.sh@73 -- # return 0 00:03:41.252 09:41:34 -- setup/hugepages.sh@137 -- # setup output 00:03:41.252 09:41:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.252 09:41:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:42.189 lsblk: /dev/nvme1c1n1: not a block device 00:03:42.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:42.446 0000:00:06.0 
(1b36 0010): nvme -> uio_pci_generic
00:03:42.446 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:03:42.446 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:03:42.707 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:03:42.707 09:41:36 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:42.707 09:41:36 -- setup/hugepages.sh@89 -- # local node
00:03:42.707 09:41:36 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:42.707 09:41:36 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:42.707 09:41:36 -- setup/hugepages.sh@92 -- # local surp
00:03:42.707 09:41:36 -- setup/hugepages.sh@93 -- # local resv
00:03:42.707 09:41:36 -- setup/hugepages.sh@94 -- # local anon
00:03:42.707 09:41:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:42.707 09:41:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:42.707 09:41:36 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:42.707 09:41:36 -- setup/common.sh@18 -- # local node=
00:03:42.707 09:41:36 -- setup/common.sh@19 -- # local var val
00:03:42.707 09:41:36 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.707 09:41:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.707 09:41:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.707 09:41:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.707 09:41:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.707 09:41:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.707 09:41:36 -- setup/common.sh@31 -- # IFS=': '
00:03:42.707 09:41:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7979916 kB' 'MemAvailable: 9517100 kB' 'Buffers: 2436 kB' 'Cached: 1751396 kB' 'SwapCached: 0 kB' 'Active: 461188 kB' 'Inactive: 1411344 kB' 'Active(anon): 129188 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411344 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120028 kB' 'Mapped: 48720 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 133920 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71916 kB' 'KernelStack: 6416 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:42.707 09:41:36 -- setup/common.sh@31 -- # read -r var val _
00:03:42.708 [xtrace elided: the scan loop 'continue's past every meminfo field until it reaches AnonHugePages]
00:03:42.708 09:41:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:42.708 09:41:36 -- setup/common.sh@33 -- # echo 0
00:03:42.708 09:41:36 -- setup/common.sh@33 -- # return 0
00:03:42.708 09:41:36 -- setup/hugepages.sh@97 -- # anon=0
00:03:42.708 09:41:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:42.708 09:41:36 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.708 09:41:36 -- setup/common.sh@18 -- # local node=
00:03:42.708 09:41:36 -- setup/common.sh@19 -- # local var val
00:03:42.708 09:41:36 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.708 09:41:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.708 09:41:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.708 09:41:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.708 09:41:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.708 09:41:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.708 09:41:36 -- setup/common.sh@31 -- # IFS=': '
00:03:42.708 09:41:36 -- setup/common.sh@31 -- # read -r var val _
00:03:42.708 09:41:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7979668 kB' 'MemAvailable: 9516852 kB' 'Buffers: 2436 kB' 'Cached: 1751396 kB' 'SwapCached: 0 kB' 'Active: 460440 kB' 'Inactive: 1411344 kB' 'Active(anon): 128440 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411344 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119556 kB' 'Mapped: 48588 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 133892 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71888 kB' 'KernelStack: 6432 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:42.708 [xtrace elided: the scan loop 'continue's past every meminfo field until it reaches HugePages_Surp]
00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.709 09:41:36 -- setup/common.sh@33 -- # echo 0
00:03:42.709 09:41:36 -- setup/common.sh@33 -- # return 0
00:03:42.709 09:41:36 -- setup/hugepages.sh@99 -- # surp=0
00:03:42.709 09:41:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:42.709 09:41:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:42.709 09:41:36 -- setup/common.sh@18 -- # local node=
00:03:42.709 09:41:36 -- setup/common.sh@19 -- # local var val
00:03:42.709 09:41:36 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.709 09:41:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.709 09:41:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.709 09:41:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.709 09:41:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.709 09:41:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': '
00:03:42.709 09:41:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7979420 kB' 'MemAvailable: 9516604 kB' 'Buffers: 2436 kB' 'Cached: 1751396 kB' 'SwapCached: 0 kB' 'Active: 460700 kB' 'Inactive: 1411344 kB' 'Active(anon): 128700 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411344 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119816 kB' 'Mapped: 48588 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 133892 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71888 kB' 'KernelStack: 6432 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _
00:03:42.709 [xtrace elided: per-field scan in progress toward HugePages_Rsvd; the capture ends here, before the match is reached]
continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.709 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # 
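[The loop traced above is get_meminfo at work: setup/common.sh reads /proc/meminfo one record at a time with IFS=': ', skipping every key until the requested one matches, then echoes its value and returns. A condensed sketch of that logic, using the same field layout as the dump above; the 'Node N ' prefix-stripping that the mapfile/mem=() lines perform for per-node files is noted in a comment rather than reproduced:

    get_meminfo() {                      # e.g. get_meminfo HugePages_Rsvd
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # with a node argument, read that node's own meminfo instead
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            # per-node files prefix each record with 'Node N '; the real
            # helper strips that first (the mem=() line above), which this
            # sketch omits
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

Against the dump above, get_meminfo HugePages_Rsvd prints 0, exactly the value the trace echoes before returning.]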
IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.710 09:41:36 -- setup/common.sh@33 -- # echo 0 00:03:42.710 09:41:36 -- setup/common.sh@33 -- # return 0 00:03:42.710 nr_hugepages=1024 00:03:42.710 resv_hugepages=0 00:03:42.710 09:41:36 -- setup/hugepages.sh@100 -- # resv=0 00:03:42.710 09:41:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:42.710 09:41:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.710 surplus_hugepages=0 00:03:42.710 anon_hugepages=0 00:03:42.710 09:41:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.710 09:41:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.710 09:41:36 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.710 09:41:36 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:42.710 09:41:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.710 09:41:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.710 09:41:36 -- setup/common.sh@18 -- # local node= 00:03:42.710 09:41:36 -- setup/common.sh@19 -- # local var val 00:03:42.710 09:41:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.710 09:41:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.710 09:41:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.710 09:41:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.710 09:41:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.710 09:41:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7979420 kB' 'MemAvailable: 9516604 kB' 'Buffers: 2436 kB' 'Cached: 1751396 kB' 'SwapCached: 0 kB' 'Active: 460636 kB' 'Inactive: 1411344 kB' 'Active(anon): 128636 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411344 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119748 kB' 'Mapped: 48588 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 133892 kB' 'SReclaimable: 62004 kB' 
'SUnreclaim: 71888 kB' 'KernelStack: 6416 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.710 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.710 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.711 09:41:36 -- setup/common.sh@33 -- # echo 1024 00:03:42.711 09:41:36 -- setup/common.sh@33 -- # return 0 00:03:42.711 09:41:36 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.711 09:41:36 -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.711 09:41:36 -- setup/hugepages.sh@27 -- # local node 00:03:42.711 09:41:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.711 09:41:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:42.711 09:41:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:42.711 09:41:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.711 09:41:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.711 09:41:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.711 09:41:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.711 09:41:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.711 09:41:36 -- setup/common.sh@18 -- # local node=0 00:03:42.711 09:41:36 -- setup/common.sh@19 -- # local var val 00:03:42.711 09:41:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.711 09:41:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.711 09:41:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.711 09:41:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.711 09:41:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.711 09:41:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7979420 kB' 'MemUsed: 4262544 kB' 'SwapCached: 0 kB' 'Active: 460636 kB' 'Inactive: 1411344 kB' 'Active(anon): 128636 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411344 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1753832 kB' 'Mapped: 48588 kB' 'AnonPages: 119752 kB' 'Shmem: 10472 kB' 'KernelStack: 6416 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62004 kB' 'Slab: 133892 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- 
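[Two things just happened in the trace: get_nodes discovered a single NUMA node by globbing /sys/devices/system/node/node+([0-9]) (an extglob pattern), and the per-node query switched mem_f to node0's own meminfo file, whose dump carries the FilePages and MemUsed fields the global /proc/meminfo lacks. A sketch of the node discovery; reading the per-node page count from the sysfs nr_hugepages counter is an assumption here, since the trace only shows 1024 landing in nodes_sys[0], not where the value came from:

    shopt -s extglob
    declare -a nodes_sys
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # assumed source for the per-node count; the trace records the
            # assignment nodes_sys[0]=1024 but not its origin
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))               # the trace's (( no_nodes > 0 )) guard
    }
]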
setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # 
continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # continue 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.711 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.711 09:41:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.711 09:41:36 -- setup/common.sh@33 -- # echo 0 00:03:42.711 09:41:36 -- setup/common.sh@33 -- # return 0 00:03:42.711 09:41:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.711 node0=1024 expecting 1024 00:03:42.711 09:41:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.711 09:41:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.711 09:41:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.711 09:41:36 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:42.711 09:41:36 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:42.711 00:03:42.711 real 0m1.503s 00:03:42.711 user 0m0.643s 00:03:42.711 sys 0m0.815s 00:03:42.711 09:41:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.711 09:41:36 -- common/autotest_common.sh@10 -- # set +x 00:03:42.711 ************************************ 00:03:42.711 END TEST default_setup 00:03:42.711 ************************************ 00:03:42.969 09:41:36 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:42.969 09:41:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:42.969 09:41:36 -- 
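[That closes default_setup: the test passes because HugePages_Total (1024) equals nr_hugepages + surplus + reserved (1024 + 0 + 0) and node0 accounts for every page. Reduced to the values read out of the dumps above, the verification is essentially:

    nr_hugepages=1024 surp=0 resv=0           # from the scans traced above
    total=$(get_meminfo HugePages_Total)      # 1024
    (( total == nr_hugepages + surp + resv )) || return 1
    # per node: expected = configured pages + that node's surplus
    node0_surp=$(get_meminfo HugePages_Surp 0)
    echo "node0=$(( 1024 + node0_surp )) expecting 1024"

which matches the 'node0=1024 expecting 1024' line in the log.]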
common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.969 09:41:36 -- common/autotest_common.sh@10 -- # set +x 00:03:42.969 ************************************ 00:03:42.969 START TEST per_node_1G_alloc 00:03:42.969 ************************************ 00:03:42.969 09:41:36 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:42.969 09:41:36 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:42.969 09:41:36 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:42.969 09:41:36 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:42.969 09:41:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:42.969 09:41:36 -- setup/hugepages.sh@51 -- # shift 00:03:42.969 09:41:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:42.969 09:41:36 -- setup/hugepages.sh@52 -- # local node_ids 00:03:42.969 09:41:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.969 09:41:36 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:42.969 09:41:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:42.969 09:41:36 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:42.969 09:41:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.969 09:41:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:42.969 09:41:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:42.969 09:41:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.969 09:41:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.969 09:41:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:42.969 09:41:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.969 09:41:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:42.969 09:41:36 -- setup/hugepages.sh@73 -- # return 0 00:03:42.969 09:41:36 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:42.969 09:41:36 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:42.969 09:41:36 -- setup/hugepages.sh@146 -- # setup output 00:03:42.969 09:41:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.969 09:41:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:43.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.488 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.488 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.488 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.488 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.488 09:41:37 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:43.488 09:41:37 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:43.488 09:41:37 -- setup/hugepages.sh@89 -- # local node 00:03:43.488 09:41:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.488 09:41:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.488 09:41:37 -- setup/hugepages.sh@92 -- # local surp 00:03:43.488 09:41:37 -- setup/hugepages.sh@93 -- # local resv 00:03:43.488 09:41:37 -- setup/hugepages.sh@94 -- # local anon 00:03:43.488 09:41:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.488 09:41:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.488 09:41:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.488 09:41:37 -- setup/common.sh@18 -- # local node= 00:03:43.488 09:41:37 -- setup/common.sh@19 -- # local var val 00:03:43.488 09:41:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.488 
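[The sizing above is straight division: per_node_1G_alloc requests 1048576 kB (1 GiB) on node 0, and with the 2048 kB Hugepagesize reported in the earlier dumps that comes to 512 pages, which is why nr_hugepages drops from 1024 to 512 and HUGENODE pins them to node 0:

    size_kb=1048576                               # 1 GiB, in kB
    hugepagesize_kb=2048                          # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb )) # 512
    NRHUGE=$nr_hugepages HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh

The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test that follows gates the AnonHugePages lookup on transparent hugepages not being set to [never], as the @96/@97 lines in the trace show.]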
09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.488 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.488 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.488 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.488 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9023640 kB' 'MemAvailable: 10560840 kB' 'Buffers: 2436 kB' 'Cached: 1751396 kB' 'SwapCached: 0 kB' 'Active: 461916 kB' 'Inactive: 1411360 kB' 'Active(anon): 129916 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120700 kB' 'Mapped: 48804 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 133936 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71932 kB' 'KernelStack: 6540 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 09:41:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.488 09:41:37 -- setup/common.sh@32 -- # continue 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 09:41:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.488 09:41:37 -- setup/common.sh@32 -- # continue 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 09:41:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.488 09:41:37 -- setup/common.sh@32 -- # continue 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 09:41:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.488 09:41:37 -- setup/common.sh@32 -- # continue 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 09:41:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.488 09:41:37 -- setup/common.sh@32 -- # continue 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 09:41:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.488 09:41:37 -- setup/common.sh@32 -- # continue 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 09:41:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:43.488 09:41:37 -- setup/common.sh@32 -- # continue
00:03:43.488 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:43.488 09:41:37 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the same IFS=': ' / read / compare / continue cycle repeats for every remaining /proc/meminfo field from Inactive through HardwareCorrupted; none matches AnonHugePages]
00:03:43.489 09:41:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:43.489 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:43.489 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:43.489 09:41:37 -- setup/hugepages.sh@97 -- # anon=0
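The repeating pattern above is the whole of the get_meminfo lookup: split each meminfo line on ': ', skip fields that do not match the requested key, and echo the first word of the matching value. Below is a minimal bash sketch of that lookup, reconstructed from this trace rather than copied from the SPDK source; the traced implementation additionally snapshots the file with mapfile and strips any leading 'Node N ' prefix, which this sketch omits.

    get_meminfo_sketch() {                  # illustrative helper, not the SPDK source
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # A node argument switches to that node's own meminfo file (see the
        # node0 lookup later in this log).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the "continue" lines traced above
            echo "${val%% *}"                  # value only, unit ("kB") dropped
            return 0
        done < "$mem_f"
        return 1
    }

Called as get_meminfo_sketch AnonHugePages against the dump above, it would print 0, matching the echo 0 / return 0 traced for this call.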
00:03:43.489 09:41:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:43.489 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:43.489 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:43.489 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:43.489 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:43.489 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.489 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.489 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.489 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.489 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.489 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:43.489 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9023640 kB' 'MemAvailable: 10560840 kB' 'Buffers: 2436 kB' 'Cached: 1751396 kB' 'SwapCached: 0 kB' 'Active: 460992 kB' 'Inactive: 1411360 kB' 'Active(anon): 128992 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120032 kB' 'Mapped: 48616 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 134000 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71996 kB' 'KernelStack: 6480 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:43.489 09:41:37 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: every field from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped with continue]
00:03:43.491 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:43.491 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:43.491 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:43.491 09:41:37 -- setup/hugepages.sh@99 -- # surp=0
00:03:43.491 09:41:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:43.491 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:43.491 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:43.491 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:43.491 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:43.491 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.491 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.491 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.491 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.491 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.491 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:43.491 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9023640 kB' 'MemAvailable: 10560840 kB' 'Buffers: 2436 kB' 'Cached: 1751396 kB' 'SwapCached: 0 kB' 'Active: 460916 kB' 'Inactive: 1411360 kB' 'Active(anon): 128916 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119988 kB' 'Mapped: 48608 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 133996 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 6480 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:43.491 09:41:37 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: every field from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped with continue]
00:03:43.492 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:43.492 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:43.492 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:43.492 nr_hugepages=512
00:03:43.492 resv_hugepages=0
00:03:43.492 surplus_hugepages=0
00:03:43.492 anon_hugepages=0
00:03:43.492 09:41:37 -- setup/hugepages.sh@100 -- # resv=0
00:03:43.492 09:41:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:43.492 09:41:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:43.492 09:41:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:43.492 09:41:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:43.492 09:41:37 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:43.492 09:41:37 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
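The two arithmetic guards just traced (hugepages.sh@107 and @109) are the accounting step: the 512 pages the test asked for must be explained by the counters read above, split across configured, surplus, and reserved pages. Below is a consolidated sketch of that arithmetic, not the exact statement order of the script, reusing the illustrative get_meminfo_sketch helper from earlier; the values in the comments are the ones visible in this log's dumps.

    req=512                                    # pages requested by the test
    anon=$(get_meminfo_sketch AnonHugePages)   # 0 in the dump above
    surp=$(get_meminfo_sketch HugePages_Surp)  # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)  # 0
    nr=$(get_meminfo_sketch HugePages_Total)   # 512
    (( req == nr + surp + resv )) || echo "hugepage accounting mismatch" >&2

With surp and resv both 0 the check reduces to req == nr, which appears to be why the trace evaluates (( 512 == nr_hugepages )) and then re-reads HugePages_Total before the final comparison below.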
00:03:43.492 09:41:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:43.492 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:43.492 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:43.492 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:43.492 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:43.492 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.492 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.492 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.492 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.492 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.493 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:43.493 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9023388 kB' 'MemAvailable: 10560588 kB' 'Buffers: 2436 kB' 'Cached: 1751396 kB' 'SwapCached: 0 kB' 'Active: 460772 kB' 'Inactive: 1411360 kB' 'Active(anon): 128772 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120088 kB' 'Mapped: 48608 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 133996 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 6480 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:43.493 09:41:37 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: every field from MemTotal through Unaccepted is tested against HugePages_Total and skipped with continue]
00:03:43.494 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:43.494 09:41:37 -- setup/common.sh@33 -- # echo 512
00:03:43.494 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:43.494 09:41:37 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:43.494 09:41:37 -- setup/hugepages.sh@112 -- # get_nodes
00:03:43.494 09:41:37 -- setup/hugepages.sh@27 -- # local node
00:03:43.494 09:41:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:43.494 09:41:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:43.494 09:41:37 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:43.494 09:41:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
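get_nodes, traced above, discovers the NUMA topology by globbing sysfs, and the loop that follows repeats the meminfo lookup per node (on this host only node0 exists, hence no_nodes=1). Below is a sketch of that per-node walk reconstructed from the trace; the variable names mirror the trace, and the prefix-stripping line is the trace's own mem=("${mem[@]#Node +([0-9]) }") pattern, needed because per-node meminfo lines begin with "Node N ".

    shopt -s extglob nullglob                   # node+([0-9]) globbing, as in the trace
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}                       # "/sys/.../node0" -> "0"
        mapfile -t mem < "$node/meminfo"        # snapshot the per-node file
        mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node N " line prefix
        for line in "${mem[@]}"; do
            [[ $line == HugePages_Total:* ]] || continue
            read -r _ count _ <<< "$line"       # field after the key
            nodes_sys[id]=$count
        done
    done
    echo "nodes found: ${!nodes_sys[*]}"        # this host: just node 0

The per-node HugePages_Surp read traced next is the same get_meminfo code path, switched to /sys/devices/system/node/node0/meminfo by its node argument.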
00:03:43.494 09:41:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:43.494 09:41:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:43.494 09:41:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:43.494 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:43.494 09:41:37 -- setup/common.sh@18 -- # local node=0
00:03:43.494 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:43.494 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:43.494 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.494 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:43.494 09:41:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:43.494 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.494 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.494 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:43.494 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9023388 kB' 'MemUsed: 3218576 kB' 'SwapCached: 0 kB' 'Active: 460892 kB' 'Inactive: 1411360 kB' 'Active(anon): 128892 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1753832 kB' 'Mapped: 48612 kB' 'AnonPages: 120004 kB' 'Shmem: 10472 kB' 'KernelStack: 6512 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62004 kB' 'Slab: 133996 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:43.494 09:41:37 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: every node0 field from MemTotal through FileHugePages is tested against HugePages_Surp and skipped with continue]
00:03:43.495 09:41:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:43.495 09:41:37 -- setup/common.sh@32 -- # continue 00:03:43.495 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.495 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.495 09:41:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.495 09:41:37 -- setup/common.sh@32 -- # continue 00:03:43.495 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.495 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.495 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.495 09:41:37 -- setup/common.sh@32 -- # continue 00:03:43.495 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.495 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.495 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.495 09:41:37 -- setup/common.sh@32 -- # continue 00:03:43.495 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.495 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.495 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.495 09:41:37 -- setup/common.sh@33 -- # echo 0 00:03:43.495 09:41:37 -- setup/common.sh@33 -- # return 0 00:03:43.495 09:41:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.495 09:41:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.495 09:41:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.495 09:41:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.495 node0=512 expecting 512 00:03:43.495 ************************************ 00:03:43.495 END TEST per_node_1G_alloc 00:03:43.495 ************************************ 00:03:43.495 09:41:37 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:43.495 09:41:37 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:43.495 00:03:43.495 real 0m0.716s 00:03:43.495 user 0m0.343s 00:03:43.495 sys 0m0.388s 00:03:43.495 09:41:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.495 09:41:37 -- common/autotest_common.sh@10 -- # set +x 00:03:43.754 09:41:37 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:43.754 09:41:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:43.754 09:41:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:43.754 09:41:37 -- common/autotest_common.sh@10 -- # set +x 00:03:43.754 ************************************ 00:03:43.754 START TEST even_2G_alloc 00:03:43.754 ************************************ 00:03:43.754 09:41:37 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:43.754 09:41:37 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:43.754 09:41:37 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.754 09:41:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:43.754 09:41:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.754 09:41:37 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.754 09:41:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:43.754 09:41:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.754 09:41:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.754 09:41:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.754 09:41:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:43.754 09:41:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.754 09:41:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 
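The get_test_nr_hugepages trace just above converts the even_2G_alloc request of 2097152 kB (2 GiB) into nr_hugepages=1024. A minimal sketch of that arithmetic, assuming the 2048 kB Hugepagesize this VM reports in /proc/meminfo; variable names mirror the trace, not the verbatim setup/hugepages.sh:

    #!/usr/bin/env bash
    # default_hugepages: Hugepagesize from /proc/meminfo, in kB (2048 on this VM)
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    size=2097152   # requested test allocation in kB (2 GiB)
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    fi
    echo "nr_hugepages=$nr_hugepages"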
00:03:43.754 09:41:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:43.754 09:41:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:43.754 09:41:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:43.754 09:41:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:03:43.754 09:41:37 -- setup/hugepages.sh@83 -- # : 0
00:03:43.754 09:41:37 -- setup/hugepages.sh@84 -- # : 0
00:03:43.754 09:41:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:43.754 09:41:37 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:43.754 09:41:37 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:43.754 09:41:37 -- setup/hugepages.sh@153 -- # setup output
00:03:43.754 09:41:37 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:43.754 09:41:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:44.013 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:44.277 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.277 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.277 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.277 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.277 09:41:37 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:44.277 09:41:37 -- setup/hugepages.sh@89 -- # local node
00:03:44.277 09:41:37 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:44.277 09:41:37 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:44.277 09:41:37 -- setup/hugepages.sh@92 -- # local surp
00:03:44.277 09:41:37 -- setup/hugepages.sh@93 -- # local resv
00:03:44.277 09:41:37 -- setup/hugepages.sh@94 -- # local anon
00:03:44.277 09:41:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:44.277 09:41:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:44.277 09:41:37 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:44.277 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:44.277 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:44.277 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.277 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.277 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.277 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.277 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.277 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.277 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:44.277 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7973476 kB' 'MemAvailable: 9510676 kB' 'Buffers: 2436 kB' 'Cached: 1751396 kB' 'SwapCached: 0 kB' 'Active: 460820 kB' 'Inactive: 1411360 kB' 'Active(anon): 128820 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119992 kB' 'Mapped: 48780 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 134036 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 72032 kB' 'KernelStack: 6444 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:44.277 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:44.277 09:41:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.277 09:41:37 -- setup/common.sh@32 -- # continue
[... the IFS=': '/read/compare/continue cycle repeats for every remaining field of the snapshot above until the requested key matches ...]
00:03:44.278 09:41:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.278 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:44.278 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:44.278 09:41:37 -- setup/hugepages.sh@97 -- # anon=0
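All of that read/compare/continue churn is a single helper: get_meminfo snapshots a meminfo file into an array and scans it line by line for one key. A condensed sketch of the lookup as the trace suggests it works; this is a paraphrase of setup/common.sh, not the verbatim source:

    #!/usr/bin/env bash
    shopt -s extglob   # required by the "Node N " prefix strip below
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # A per-node query (e.g. node=0) reads that node's own meminfo view.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node 0 " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long field scan in the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Total    # prints 1024 on this VM
    get_meminfo HugePages_Surp 0   # per-node form, as in the per_node_1G_alloc test

Because every call re-reads and rescans the whole file, each counter queried below costs another full pass, which is why the same snapshot is printed four times in this segment.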
00:03:44.278 09:41:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:44.279 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.279 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:44.279 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:44.279 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.279 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.279 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.279 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.279 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.279 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.279 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:44.279 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:44.279 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7973476 kB' 'MemAvailable: 9510684 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 460684 kB' 'Inactive: 1411360 kB' 'Active(anon): 128684 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120064 kB' 'Mapped: 48592 kB' 'Shmem: 10472 kB' 'KReclaimable: 62020 kB' 'Slab: 134072 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72052 kB' 'KernelStack: 6384 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:44.279 09:41:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.279 09:41:37 -- setup/common.sh@32 -- # continue
[... the IFS=': '/read/compare/continue cycle repeats for every remaining field of the snapshot above until the requested key matches ...]
00:03:44.280 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.280 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:44.280 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:44.280 09:41:37 -- setup/hugepages.sh@99 -- # surp=0
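anon and surp each cost a full rescan above, and resv repeats the pattern next. Purely as an illustration (the harness itself keeps calling get_meminfo once per counter), a single awk pass could collect all four hugepage counters at once:

    # Hypothetical one-pass alternative; not what setup/common.sh does.
    read -r hp_total hp_free hp_rsvd hp_surp < <(
        awk '$1 == "HugePages_Total:" {t = $2}
             $1 == "HugePages_Free:"  {f = $2}
             $1 == "HugePages_Rsvd:"  {r = $2}
             $1 == "HugePages_Surp:"  {s = $2}
             END {print t, f, r, s}' /proc/meminfo
    )
    echo "total=$hp_total free=$hp_free rsvd=$hp_rsvd surp=$hp_surp"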
00:03:44.280 09:41:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:44.280 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:44.280 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:44.280 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:44.280 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.280 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.280 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.280 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.280 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.280 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.280 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:44.280 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:44.280 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7973476 kB' 'MemAvailable: 9510684 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 460620 kB' 'Inactive: 1411360 kB' 'Active(anon): 128620 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119764 kB' 'Mapped: 48592 kB' 'Shmem: 10472 kB' 'KReclaimable: 62020 kB' 'Slab: 134060 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72040 kB' 'KernelStack: 6416 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:44.280 09:41:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:44.280 09:41:37 -- setup/common.sh@32 -- # continue
[... the IFS=': '/read/compare/continue cycle repeats for every remaining field of the snapshot above until the requested key matches ...]
00:03:44.282 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:44.282 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:44.282 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:44.282 nr_hugepages=1024
00:03:44.282 resv_hugepages=0
00:03:44.282 surplus_hugepages=0
00:03:44.282 anon_hugepages=0
00:03:44.282 09:41:37 -- setup/hugepages.sh@100 -- # resv=0
00:03:44.282 09:41:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:44.282 09:41:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:44.282 09:41:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:44.282 09:41:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:44.282 09:41:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.282 09:41:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
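The two arithmetic checks just traced are the core assertion of verify_nr_hugepages: the HugePages_Total value read back from the kernel must equal the requested page count plus any surplus and reserved pages. Restated with the values echoed above (a sketch, not the script itself):

    # Values taken from the trace: nr_hugepages=1024, surp=resv=anon=0.
    nr_hugepages=1024 surp=0 resv=0
    hp_total=1024   # get_meminfo HugePages_Total, read back after setup.sh ran
    (( hp_total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( hp_total == nr_hugepages )) && echo "kernel allocated exactly the requested pages"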
00:03:44.282 09:41:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:44.282 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:44.282 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:44.282 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:44.282 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.282 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.282 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.282 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.282 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.282 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.282 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:44.282 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:44.282 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7973476 kB' 'MemAvailable: 9510684 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 460612 kB' 'Inactive: 1411360 kB' 'Active(anon): 128612 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119752 kB' 'Mapped: 48592 kB' 'Shmem: 10472 kB' 'KReclaimable: 62020 kB' 'Slab: 134056 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72036 kB' 'KernelStack: 6416 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:44.282 09:41:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:44.282 09:41:37 -- setup/common.sh@32 -- # continue
[... repeated @31 IFS=': '/read and @32 test/continue xtrace records elided while the scan skips every field from MemFree through Unaccepted ...]
00:03:44.283 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:44.283 09:41:37 -- setup/common.sh@33 -- # echo 1024
00:03:44.283 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:44.283 09:41:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.283 09:41:37 -- setup/hugepages.sh@112 -- # get_nodes
00:03:44.283 09:41:37 -- setup/hugepages.sh@27 -- # local node
00:03:44.283 09:41:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.283 09:41:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:44.283 09:41:37 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:44.283 09:41:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:44.283 09:41:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:44.283 09:41:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:44.283 09:41:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:44.283 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.283 09:41:37 -- setup/common.sh@18 -- # local node=0
00:03:44.283 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:44.283 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.283 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.283 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:44.283 09:41:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:44.283 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.283 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.283 09:41:37 -- setup/common.sh@31 -- # IFS=': '
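
Note: the @22-@24 records above show how the same get_meminfo helper switches data sources. Called without a node argument it reads /proc/meminfo (the probe of /sys/devices/system/node/node/meminfo simply fails when the node variable is empty); called with node=0 the probe succeeds and it reads the per-node file instead, and the @29 record strips the "Node 0 " prefix that every line of that file carries. A sketch of the same selection under those assumptions:

    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    # per-node lines look like "Node 0 HugePages_Total:  1024", so drop the prefix
    sed -E 's/^Node [0-9]+ //' "$mem_f" | grep '^HugePages_'
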
00:03:44.283 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7973476 kB' 'MemUsed: 4268488 kB' 'SwapCached: 0 kB' 'Active: 460728 kB' 'Inactive: 1411360 kB' 'Active(anon): 128728 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1753836 kB' 'Mapped: 48592 kB' 'AnonPages: 119876 kB' 'Shmem: 10472 kB' 'KernelStack: 6432 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62020 kB' 'Slab: 134056 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:44.283 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:44.283 09:41:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.283 09:41:37 -- setup/common.sh@32 -- # continue
[... repeated @31 IFS=': '/read and @32 test/continue xtrace records elided while the scan skips every node0 field from MemFree through HugePages_Free ...]
00:03:44.284 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.284 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:44.284 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:44.284 09:41:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
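
Note: the backslash-riddled operands such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not in the script source; inside [[ ... == ... ]] the right-hand side is a glob pattern, and bash's xtrace escapes each character when printing the word to show it is matched literally. The @31/@32/@33 records above are one field-scanning loop, which reduces to roughly the following (a re-creation for illustration, not the SPDK code itself; the per-node variant additionally strips the "Node N " prefix first):

    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_Surp ]] || continue   # traced as \H\u\g\e\P\a\g\e\s\_\S\u\r\p
        echo "$val"                                # the "echo 0" record above
        break                                      # followed by "return 0" in the helper
    done < /proc/meminfo
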
00:03:44.284 node0=1024 expecting 1024
09:41:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:44.284 09:41:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:44.284 09:41:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:44.284 09:41:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:44.284 09:41:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:44.284
00:03:44.284 real 0m0.710s
00:03:44.284 user 0m0.341s
00:03:44.284 sys 0m0.391s
00:03:44.284 09:41:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:44.284 09:41:37 -- common/autotest_common.sh@10 -- # set +x
00:03:44.284 ************************************
00:03:44.284 END TEST even_2G_alloc
00:03:44.284 ************************************
00:03:44.284 09:41:38 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:44.284 09:41:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:44.284 09:41:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:44.284 09:41:38 -- common/autotest_common.sh@10 -- # set +x
00:03:44.543 ************************************
00:03:44.543 START TEST odd_alloc
00:03:44.543 ************************************
00:03:44.543 09:41:38 -- common/autotest_common.sh@1104 -- # odd_alloc
00:03:44.543 09:41:38 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:44.543 09:41:38 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:44.543 09:41:38 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:44.543 09:41:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:44.543 09:41:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:44.543 09:41:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:44.543 09:41:38 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:44.543 09:41:38 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:44.543 09:41:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:44.543 09:41:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:44.543 09:41:38 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:44.543 09:41:38 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:44.543 09:41:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:44.543 09:41:38 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:44.544 09:41:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:44.544 09:41:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:03:44.544 09:41:38 -- setup/hugepages.sh@83 -- # : 0
00:03:44.544 09:41:38 -- setup/hugepages.sh@84 -- # : 0
00:03:44.544 09:41:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:44.544 09:41:38 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:44.544 09:41:38 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:44.544 09:41:38 -- setup/hugepages.sh@160 -- # setup output
00:03:44.544 09:41:38 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:44.544 09:41:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:44.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:44.802 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.802 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.802 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.802 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
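
Note: the sizing arithmetic behind the odd_alloc records above: HUGEMEM is in megabytes, so 2049 MB becomes the 2098176 kB argument to get_test_nr_hugepages, and at this VM's 2048 kB Hugepagesize that is 1024.5 pages, which the helper turns into the deliberately odd nr_hugepages=1025. Illustrative only; the exact rounding lives inside the SPDK helper, and ceiling division is assumed here:

    hugemem_mb=2049
    size_kb=$(( hugemem_mb * 1024 ))               # 2098176, the traced argument
    page_kb=2048                                   # Hugepagesize on this VM
    pages=$(( (size_kb + page_kb - 1) / page_kb )) # ceiling division
    echo "nr_hugepages=$pages"                     # -> 1025
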
00:03:45.065 09:41:38 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:45.065 09:41:38 -- setup/hugepages.sh@89 -- # local node
00:03:45.065 09:41:38 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:45.065 09:41:38 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:45.065 09:41:38 -- setup/hugepages.sh@92 -- # local surp
00:03:45.065 09:41:38 -- setup/hugepages.sh@93 -- # local resv
00:03:45.065 09:41:38 -- setup/hugepages.sh@94 -- # local anon
00:03:45.065 09:41:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:45.065 09:41:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:45.065 09:41:38 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:45.065 09:41:38 -- setup/common.sh@18 -- # local node=
00:03:45.065 09:41:38 -- setup/common.sh@19 -- # local var val
00:03:45.065 09:41:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.065 09:41:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.065 09:41:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.065 09:41:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.065 09:41:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.065 09:41:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.065 09:41:38 -- setup/common.sh@31 -- # IFS=': '
00:03:45.065 09:41:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7960136 kB' 'MemAvailable: 9497348 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 460868 kB' 'Inactive: 1411364 kB' 'Active(anon): 128868 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120224 kB' 'Mapped: 48928 kB' 'Shmem: 10472 kB' 'KReclaimable: 62020 kB' 'Slab: 134068 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72048 kB' 'KernelStack: 6492 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:45.065 09:41:38 -- setup/common.sh@31 -- # read -r var val _
00:03:45.065 09:41:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:45.065 09:41:38 -- setup/common.sh@32 -- # continue
[... repeated @31 IFS=': '/read and @32 test/continue xtrace records elided while the scan skips every field from MemFree through HardwareCorrupted ...]
00:03:45.066 09:41:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:45.066 09:41:38 -- setup/common.sh@33 -- # echo 0
00:03:45.066 09:41:38 -- setup/common.sh@33 -- # return 0
00:03:45.066 09:41:38 -- setup/hugepages.sh@97 -- # anon=0
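
Note: the @96 record near the top of verify_nr_hugepages compares "always [madvise] never", the canonical contents of /sys/kernel/mm/transparent_hugepage/enabled with brackets marking the active mode, against the pattern *\[\n\e\v\e\r\]*. Only when transparent huge pages are not disabled does the function go on to sample AnonHugePages, which is why the trace then runs get_meminfo AnonHugePages and lands on anon=0. A sketch of the same gate (the sysfs path is inferred from the traced value, it is not named in the log):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP could be serving anonymous huge pages, so count them
        awk '/^AnonHugePages:/ {print $2}' /proc/meminfo   # 0 kB in this run
    fi
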
00:03:45.066 09:41:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:45.066 09:41:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.066 09:41:38 -- setup/common.sh@18 -- # local node=
00:03:45.066 09:41:38 -- setup/common.sh@19 -- # local var val
00:03:45.066 09:41:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.066 09:41:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.066 09:41:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.066 09:41:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.066 09:41:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.066 09:41:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.066 09:41:38 -- setup/common.sh@31 -- # IFS=': '
00:03:45.066 09:41:38 -- setup/common.sh@31 -- # read -r var val _
00:03:45.066 09:41:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7960136 kB' 'MemAvailable: 9497348 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 460520 kB' 'Inactive: 1411364 kB' 'Active(anon): 128520 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119892 kB' 'Mapped: 48708 kB' 'Shmem: 10472 kB' 'KReclaimable: 62020 kB' 'Slab: 134072 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72052 kB' 'KernelStack: 6428 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:45.067 09:41:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.067 09:41:38 -- setup/common.sh@32 -- # continue
[... repeated @31 IFS=': '/read and @32 test/continue xtrace records elided while the scan skips every field from MemFree through HugePages_Rsvd ...]
00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.068 09:41:38 -- setup/common.sh@33 -- # echo 0
00:03:45.068 09:41:38 -- setup/common.sh@33 -- # return 0
00:03:45.068 09:41:38 -- setup/hugepages.sh@99 -- # surp=0
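
Note: for the two scans around this point it helps to know what the counters mean: HugePages_Surp counts pages allocated beyond nr_hugepages through the kernel's overcommit limit, and HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in. Both are expected to be 0 in this test, matching the echo 0 / return 0 records. They can be probed directly:

    grep -E '^HugePages_(Rsvd|Surp):' /proc/meminfo
    cat /proc/sys/vm/nr_overcommit_hugepages   # surplus pages require this to be > 0
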
'Active(anon): 128644 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119848 kB' 'Mapped: 48592 kB' 'Shmem: 10472 kB' 'KReclaimable: 62020 kB' 'Slab: 134104 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72084 kB' 'KernelStack: 6432 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.068 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.068 09:41:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # 
continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # continue 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.069 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.069 09:41:38 -- setup/common.sh@33 -- # echo 0 00:03:45.069 09:41:38 -- setup/common.sh@33 -- # return 0 00:03:45.069 09:41:38 -- setup/hugepages.sh@100 -- # resv=0 00:03:45.069 nr_hugepages=1025 00:03:45.069 09:41:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:45.069 resv_hugepages=0 00:03:45.069 surplus_hugepages=0 00:03:45.069 anon_hugepages=0 00:03:45.069 09:41:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.069 09:41:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.069 09:41:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.069 09:41:38 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:45.069 09:41:38 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:45.069 09:41:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.069 09:41:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.069 09:41:38 -- setup/common.sh@18 -- # local node= 00:03:45.069 09:41:38 -- setup/common.sh@19 -- # local var val 00:03:45.069 09:41:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.069 09:41:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.069 09:41:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.069 09:41:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.069 09:41:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.069 09:41:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.069 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.070 09:41:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7960136 kB' 'MemAvailable: 9497348 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 460732 kB' 'Inactive: 1411364 kB' 'Active(anon): 128732 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119864 kB' 'Mapped: 48592 kB' 'Shmem: 10472 kB' 'KReclaimable: 62020 kB' 'Slab: 134100 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72080 kB' 'KernelStack: 6416 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
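The per-key scans condensed above all follow one parsing pattern: split each meminfo line on ': ' and print the value once the requested key matches. A minimal bash sketch of that pattern (an illustration, not the verbatim setup/common.sh source; the helper name is made up):

    # get_meminfo_sketch KEY -- print the value of KEY from /proc/meminfo.
    # Mirrors the IFS=': ' / read -r var val _ / continue loop in the trace;
    # the real helper also supports per-node files under
    # /sys/devices/system/node/nodeN/meminfo and strips their "Node N " prefix.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long runs of 'continue' above
            echo "$val"                        # a unit suffix like "kB" lands in $_
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Surp   # prints 0 on the VM in this run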
00:03:45.070 09:41:38 -- setup/common.sh@31 -- # read -r var val _
00:03:45.070 09:41:38 -- setup/common.sh@32 -- # [per-key scan: MemTotal through Unaccepted each compared against HugePages_Total and skipped with 'continue']
00:03:45.071 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:45.071 09:41:38 -- setup/common.sh@33 -- # echo 1025
00:03:45.071 09:41:38 -- setup/common.sh@33 -- # return 0
00:03:45.071 09:41:38 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
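With surp, resv, and the total in hand, the consistency gate just traced is plain arithmetic: the kernel-reported HugePages_Total must equal the requested page count plus surplus and reserved pages. A hedged recap using the sketch helper from above (expected values taken from this run):

    nr_hugepages=1025                              # odd count requested by odd_alloc
    surp=$(get_meminfo_sketch HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)    # 1025 in this run
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"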
00:03:45.071 09:41:38 -- setup/hugepages.sh@112 -- # get_nodes
00:03:45.071 09:41:38 -- setup/hugepages.sh@27 -- # local node
00:03:45.071 09:41:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.071 09:41:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:03:45.071 09:41:38 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:45.071 09:41:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:45.071 09:41:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:45.071 09:41:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:45.071 09:41:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:45.071 09:41:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.071 09:41:38 -- setup/common.sh@18 -- # local node=0
00:03:45.071 09:41:38 -- setup/common.sh@19 -- # local var val
00:03:45.071 09:41:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.071 09:41:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.071 09:41:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:45.071 09:41:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:45.071 09:41:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.071 09:41:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.071 09:41:38 -- setup/common.sh@31 -- # IFS=': '
00:03:45.071 09:41:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7960136 kB' 'MemUsed: 4281828 kB' 'SwapCached: 0 kB' 'Active: 460744 kB' 'Inactive: 1411364 kB' 'Active(anon): 128744 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1753836 kB' 'Mapped: 48592 kB' 'AnonPages: 120132 kB' 'Shmem: 10472 kB' 'KernelStack: 6416 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62020 kB' 'Slab: 134100 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:03:45.071 09:41:38 -- setup/common.sh@31 -- # read -r var val _
00:03:45.071 09:41:38 -- setup/common.sh@32 -- # [per-key scan: MemTotal through HugePages_Free each compared against HugePages_Surp and skipped with 'continue']
00:03:45.072 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.072 09:41:38 -- setup/common.sh@33 -- # echo 0
00:03:45.072 09:41:38 -- setup/common.sh@33 -- # return 0
00:03:45.072 09:41:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:45.072 node0=1025 expecting 1025
00:03:45.072 ************************************
00:03:45.072 END TEST odd_alloc
00:03:45.072 ************************************
00:03:45.072 09:41:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:45.072 09:41:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:45.072 09:41:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:45.072 09:41:38 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:03:45.072 09:41:38 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:03:45.072 real	0m0.707s
00:03:45.072 user	0m0.318s
00:03:45.072 sys	0m0.414s
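The closing check of odd_alloc compares the per-node pool against the expected split: on this single-NUMA-node VM all 1025 pages must be attributed to node0, which is why the trace switched mem_f to /sys/devices/system/node/node0/meminfo. One way to reproduce that read outside the suite (an assumption-laden one-liner, not part of the test itself):

    # Per-node hugepage counts live in sysfs; lines look like
    # "Node 0 HugePages_Total:  1025", so the count is the last field.
    node_total=$(awk '/HugePages_Total/ {print $NF}' /sys/devices/system/node/node0/meminfo)
    [[ $node_total == 1025 ]] && echo 'node0=1025 expecting 1025'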
00:03:45.072 09:41:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:45.072 09:41:38 -- common/autotest_common.sh@10 -- # set +x
00:03:45.072 09:41:38 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:45.072 09:41:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:45.072 09:41:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:45.072 09:41:38 -- common/autotest_common.sh@10 -- # set +x
00:03:45.072 ************************************
00:03:45.072 START TEST custom_alloc
00:03:45.072 ************************************
00:03:45.072 09:41:38 -- common/autotest_common.sh@1104 -- # custom_alloc
00:03:45.072 09:41:38 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:45.072 09:41:38 -- setup/hugepages.sh@169 -- # local node
00:03:45.072 09:41:38 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:45.072 09:41:38 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:45.072 09:41:38 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:45.072 09:41:38 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:45.072 09:41:38 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:45.072 09:41:38 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:45.072 09:41:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:45.072 09:41:38 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:45.072 09:41:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:45.072 09:41:38 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:45.072 09:41:38 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:45.072 09:41:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:45.072 09:41:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:45.072 09:41:38 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:45.072 09:41:38 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:45.072 09:41:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:45.072 09:41:38 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:45.072 09:41:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:45.072 09:41:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:45.072 09:41:38 -- setup/hugepages.sh@83 -- # : 0
00:03:45.072 09:41:38 -- setup/hugepages.sh@84 -- # : 0
00:03:45.072 09:41:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:45.072 09:41:38 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:45.072 09:41:38 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:03:45.072 09:41:38 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:45.072 09:41:38 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:45.073 09:41:38 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:45.073 09:41:38 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:45.073 09:41:38 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:45.073 09:41:38 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:45.073 09:41:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:45.073 09:41:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:45.073 09:41:38 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:45.073 09:41:38 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:45.073 09:41:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:45.073 09:41:38 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:45.073 09:41:38 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:45.073 09:41:38 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:45.073 09:41:38 -- setup/hugepages.sh@78 -- # return 0
00:03:45.073 09:41:38 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:03:45.073 09:41:38 -- setup/hugepages.sh@187 -- # setup output
00:03:45.073 09:41:38 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:45.073 09:41:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
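The nr_hugepages=512 assignment above is just the requested pool size divided by the default hugepage size: 1048576 kB / 2048 kB = 512 pages. A quick recomputation (variable names illustrative):

    size_kb=1048576                                                     # argument to get_test_nr_hugepages
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this VM
    echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"                # -> nr_hugepages=512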
00:03:45.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:45.641 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:45.641 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:45.641 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:45.641 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:45.641 09:41:39 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:03:45.641 09:41:39 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:45.641 09:41:39 -- setup/hugepages.sh@89 -- # local node
00:03:45.641 09:41:39 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:45.641 09:41:39 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:45.641 09:41:39 -- setup/hugepages.sh@92 -- # local surp
00:03:45.641 09:41:39 -- setup/hugepages.sh@93 -- # local resv
00:03:45.641 09:41:39 -- setup/hugepages.sh@94 -- # local anon
00:03:45.641 09:41:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:45.641 09:41:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:45.641 09:41:39 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:45.641 09:41:39 -- setup/common.sh@18 -- # local node=
00:03:45.641 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:45.641 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.641 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.641 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.641 09:41:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.641 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.641 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.641 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:45.641 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8999992 kB' 'MemAvailable: 10537204 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 460980 kB' 'Inactive: 1411364 kB' 'Active(anon): 128980 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120052 kB' 'Mapped: 48712 kB' 'Shmem: 10472 kB' 'KReclaimable: 62020 kB' 'Slab: 134160 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72140 kB' 'KernelStack: 6448 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:45.641 09:41:39 -- setup/common.sh@31 -- # read -r var val _
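The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] guard above inspects the transparent-hugepage mode (the bracketed token in the sysfs file) and only samples AnonHugePages when THP is not fully disabled. A sketch of the same guard, reusing the hypothetical helper from earlier:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)            # 0 in this run
    fi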
[xtrace condensed: the setup/common.sh@31-32 loop — IFS=': '; read -r var val _; continue — skips every remaining /proc/meminfo key from MemAvailable through HardwareCorrupted while looking up AnonHugePages]
00:03:45.642 09:41:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:45.642 09:41:39 -- setup/common.sh@33 -- # echo 0
00:03:45.642 09:41:39 -- setup/common.sh@33 -- # return 0
00:03:45.642 09:41:39 -- setup/hugepages.sh@97 -- # anon=0
00:03:45.642 09:41:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:45.642 09:41:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.642 09:41:39 -- setup/common.sh@18 -- # local node=
00:03:45.642 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:45.642 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.642 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.642 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.642 09:41:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.642 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.642 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.642 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:45.642 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9000416 kB' 'MemAvailable: 10537628 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 460732 kB' 'Inactive: 1411364 kB' 'Active(anon): 128732 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119800 kB' 'Mapped: 48588 kB' 'Shmem: 10472 kB' 'KReclaimable: 62020 kB' 'Slab: 134148 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72128 kB' 'KernelStack: 6400 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
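The pattern above repeats for every lookup in this test, so it is worth seeing in one place. Below is a minimal bash sketch of what the traced setup/common.sh@16-@33 statements amount to, reconstructed from the xtrace alone; it is not the verbatim SPDK helper and details may differ.

  # Reconstructed sketch, not the verbatim SPDK source: print the value of
  # one /proc/meminfo key, or of a node's meminfo when a node number is
  # passed as the second argument.
  shopt -s extglob   # required by the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo mem
      # Per-node lookups switch to the sysfs copy, as the trace shows later.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop any leading "Node N " prefix
      while IFS=': ' read -r var val _; do
          # Every "continue" in the trace is one key that did not match.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Total   # prints 512 on this box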
[xtrace condensed: the same read/continue loop walks the snapshot above key by key, MemTotal through HugePages_Rsvd, until HugePages_Surp matches]
00:03:45.642 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.642 09:41:39 -- setup/common.sh@33 -- # echo 0
00:03:45.642 09:41:39 -- setup/common.sh@33 -- # return 0
00:03:45.642 09:41:39 -- setup/hugepages.sh@99 -- # surp=0
00:03:45.642 09:41:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:45.642 09:41:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:45.642 09:41:39 -- setup/common.sh@18 -- # local node=
00:03:45.642 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:45.642 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.642 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.642 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.642 09:41:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.642 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.642 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
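One expression in that setup deserves a note: the @29 expansion mem=("${mem[@]#Node +([0-9]) }") relies on bash's extglob so that per-node meminfo lines, which carry a "Node N " prefix, parse exactly like /proc/meminfo lines. A small standalone demo, with sample data invented for illustration:

  # Demo of the @29 expansion seen in the trace: with extglob enabled,
  # ${arr[@]#Node +([0-9]) } removes a leading "Node <digits> " from every
  # array element.
  shopt -s extglob
  mem=('Node 0 MemTotal: 12241964 kB' 'Node 0 HugePages_Total: 512')
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}"   # -> "MemTotal: 12241964 kB" etc.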
00:03:45.643 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:45.643 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9000416 kB' 'MemAvailable: 10537628 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 460512 kB' 'Inactive: 1411364 kB' 'Active(anon): 128512 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119904 kB' 'Mapped: 48588 kB' 'Shmem: 10472 kB' 'KReclaimable: 62020 kB' 'Slab: 134148 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72128 kB' 'KernelStack: 6432 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: the read/continue loop walks MemTotal through HugePages_Free until HugePages_Rsvd matches]
00:03:45.905 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:45.905 09:41:39 -- setup/common.sh@33 -- # echo 0
00:03:45.905 09:41:39 -- setup/common.sh@33 -- # return 0
00:03:45.905 09:41:39 -- setup/hugepages.sh@100 -- # resv=0
00:03:45.905 09:41:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:45.905 nr_hugepages=512
00:03:45.905 09:41:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:45.905 resv_hugepages=0
00:03:45.905 09:41:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:45.905 surplus_hugepages=0
00:03:45.905 09:41:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:45.905 anon_hugepages=0
00:03:45.905 09:41:39 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:45.905 09:41:39 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:03:45.905 09:41:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:45.905 09:41:39 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:45.905 09:41:39 -- setup/common.sh@18 -- # local node=
00:03:45.905 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:45.905 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.905 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.905 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.905 09:41:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.905 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.905 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.905 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:45.905 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9000416 kB' 'MemAvailable: 10537628 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 460792 kB' 'Inactive: 1411364 kB' 'Active(anon): 128792 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119804 kB' 'Mapped: 48588 kB' 'Shmem: 10472 kB' 'KReclaimable: 62020 kB' 'Slab: 134148 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72128 kB' 'KernelStack: 6416 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 346916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: the read/continue loop walks MemTotal through Unaccepted until HugePages_Total matches]
00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:45.907 09:41:39 -- setup/common.sh@33 -- # echo 512
00:03:45.907 09:41:39 -- setup/common.sh@33 -- # return 0
00:03:45.907 09:41:39 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:45.907 09:41:39 -- setup/hugepages.sh@112 -- # get_nodes
00:03:45.907 09:41:39 -- setup/hugepages.sh@27 -- # local node
00:03:45.907 09:41:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.907 09:41:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:45.907 09:41:39 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:45.907 09:41:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:45.907 09:41:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:45.907 09:41:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:45.907 09:41:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:45.907 09:41:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.907 09:41:39 -- setup/common.sh@18 -- # local node=0
00:03:45.907 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:45.907 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.907 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.907 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:45.907 09:41:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:45.907 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.907 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:45.907 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9000416 kB' 'MemUsed: 3241548 kB' 'SwapCached: 0 kB' 'Active: 460448 kB' 'Inactive: 1411364 kB' 'Active(anon): 128448 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1753836 kB' 'Mapped: 48588 kB' 'AnonPages: 119808 kB' 'Shmem: 10472 kB' 'KernelStack: 6416 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62020 kB' 'Slab: 134148 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 72128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
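The checks at setup/hugepages.sh@107-@117 above encode the test's core invariant: the kernel must report exactly the requested pool, and per-node counts must add up once reserved and surplus pages are folded in. A hedged sketch of that accounting, with variable names reconstructed from the trace rather than copied from the SPDK source:

  nr_hugepages=512                      # requested pool size
  nodes_test=([0]=512)                  # expected pages per node (assumed here)
  surp=$(get_meminfo HugePages_Surp)    # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
  # @107/@110: the reported total must equal requested + surplus + reserved.
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
  # @115-@117: fold reserved and each node's own surplus into that node's
  # expectation before the final per-node comparison.
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv + $(get_meminfo HugePages_Surp "$node") ))
  done
  # @128 then prints e.g. "node0=512 expecting 512" and asserts equality.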
09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.907 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 09:41:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.908 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 09:41:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.908 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 09:41:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.908 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 09:41:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.908 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.908 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.908 09:41:39 -- setup/common.sh@32 -- # continue 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.908 09:41:39 -- setup/common.sh@33 -- # echo 0 00:03:45.908 09:41:39 -- setup/common.sh@33 -- # return 0 00:03:45.908 node0=512 expecting 512 00:03:45.908 09:41:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.908 09:41:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.908 09:41:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.908 09:41:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.908 09:41:39 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:45.908 09:41:39 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:45.908 00:03:45.908 real 0m0.692s 00:03:45.908 user 0m0.324s 00:03:45.908 sys 0m0.389s 00:03:45.908 ************************************ 00:03:45.908 END TEST custom_alloc 00:03:45.908 ************************************ 00:03:45.908 09:41:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.908 09:41:39 -- common/autotest_common.sh@10 -- # set +x 00:03:45.908 09:41:39 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:45.908 09:41:39 -- 
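The xtrace runs above and below all exercise the same setup/common.sh helper: get_meminfo scans 'field: value' pairs from /proc/meminfo (or a node's sysfs meminfo, whose lines carry a 'Node N ' prefix) until the requested key matches. A minimal standalone sketch of that pattern, not the exact SPDK source:

  #!/usr/bin/env bash
  shopt -s extglob

  # Sketch of the get_meminfo pattern traced in this log. Picks the
  # per-node sysfs file when a node id is given, strips the "Node N "
  # prefix that only the sysfs copy has, then scans field by field.
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _ line
      local mem_f=/proc/meminfo mem

      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")

      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] || continue   # the long continue runs in the trace
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Total     # system-wide, e.g. 512 above
  get_meminfo HugePages_Surp 0    # surplus hugepages on NUMA node 0

Each rejected field shows up in the log as one IFS/read/compare/continue quartet, which is why a single lookup produces dozens of trace entries.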
00:03:45.908 09:41:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:45.908 09:41:39 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:45.908 09:41:39 -- common/autotest_common.sh@10 -- # set +x
00:03:45.908 ************************************
00:03:45.908 START TEST no_shrink_alloc
00:03:45.908 ************************************
00:03:45.908 09:41:39 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:03:45.908 09:41:39 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:45.908 09:41:39 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:45.908 09:41:39 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:45.908 09:41:39 -- setup/hugepages.sh@51 -- # shift
00:03:45.908 09:41:39 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:45.908 09:41:39 -- setup/hugepages.sh@52 -- # local node_ids
00:03:45.908 09:41:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:45.908 09:41:39 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:45.908 09:41:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:45.908 09:41:39 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:45.908 09:41:39 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:45.908 09:41:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:45.908 09:41:39 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:45.908 09:41:39 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:45.908 09:41:39 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:45.908 09:41:39 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:45.908 09:41:39 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:45.908 09:41:39 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:45.908 09:41:39 -- setup/hugepages.sh@73 -- # return 0
00:03:45.908 09:41:39 -- setup/hugepages.sh@198 -- # setup output
00:03:45.908 09:41:39 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:45.908 09:41:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:46.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:46.480 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:46.480 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:46.480 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:46.480 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
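The get_test_nr_hugepages trace above converts the requested pool size (2097152 kB) into a page count using the 2048 kB default hugepage size visible in the meminfo snapshots ('Hugepagesize: 2048 kB'), giving nr_hugepages=1024, then assigns that count to each requested node. A sketch of that arithmetic; the division step and the per-node split are inferred from this single-node trace, not lifted from hugepages.sh:

  default_hugepages=2048              # kB, matches 'Hugepagesize: 2048 kB' in the snapshots
  declare -a nodes_test=()

  get_test_nr_hugepages() {
      local size=$1                   # requested pool size in kB
      shift
      local node_ids=("$@") node
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
      for node in "${node_ids[@]}"; do
          nodes_test[node]=$nr_hugepages             # each listed node gets the full count
      done
  }

  get_test_nr_hugepages 2097152 0
  echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"   # 1024 / 1024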
00:03:46.480 09:41:40 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:46.480 09:41:40 -- setup/hugepages.sh@89 -- # local node
00:03:46.480 09:41:40 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:46.480 09:41:40 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:46.480 09:41:40 -- setup/hugepages.sh@92 -- # local surp
00:03:46.480 09:41:40 -- setup/hugepages.sh@93 -- # local resv
00:03:46.480 09:41:40 -- setup/hugepages.sh@94 -- # local anon
00:03:46.480 09:41:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:46.480 09:41:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:46.480 09:41:40 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:46.480 09:41:40 -- setup/common.sh@18 -- # local node=
00:03:46.480 09:41:40 -- setup/common.sh@19 -- # local var val
00:03:46.480 09:41:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:46.480 09:41:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.480 09:41:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.480 09:41:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.480 09:41:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.480 09:41:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.480 09:41:40 -- setup/common.sh@31 -- # IFS=': '
00:03:46.480 09:41:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7950256 kB' 'MemAvailable: 9487460 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 458004 kB' 'Inactive: 1411364 kB' 'Active(anon): 126004 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117132 kB' 'Mapped: 48096 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 134004 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 72000 kB' 'KernelStack: 6436 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:46.480 09:41:40 -- setup/common.sh@31 -- # read -r var val _
00:03:46.480 09:41:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:46.480 09:41:40 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace repeats for each remaining meminfo field ...]
00:03:46.481 09:41:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:46.481 09:41:40 -- setup/common.sh@33 -- # echo 0
00:03:46.481 09:41:40 -- setup/common.sh@33 -- # return 0
00:03:46.481 09:41:40 -- setup/hugepages.sh@97 -- # anon=0
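The hugepages.sh@96/@97 pair above gates anonymous-hugepage accounting on the THP mode: 'always [madvise] never' is the content of /sys/kernel/mm/transparent_hugepage/enabled on this VM, and the pattern test only rejects a literal '[never]'. Roughly, reusing the get_meminfo sketch from earlier (reading the sysfs knob this way is an assumption; the trace only shows the expanded value):

  anon=0
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this VM
  if [[ $thp != *'[never]'* ]]; then
      # AnonHugePages (kB of THP-backed anonymous memory) only counts
      # toward the pool check when THP is not pinned to [never]
      anon=$(get_meminfo AnonHugePages)
  fi
  echo "anon=$anon"    # 0 here, matching hugepages.sh@97 above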
00:03:46.481 09:41:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:46.481 09:41:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.481 09:41:40 -- setup/common.sh@18 -- # local node=
00:03:46.481 09:41:40 -- setup/common.sh@19 -- # local var val
00:03:46.481 09:41:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:46.481 09:41:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.481 09:41:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.481 09:41:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.481 09:41:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.481 09:41:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.481 09:41:40 -- setup/common.sh@31 -- # IFS=': '
00:03:46.481 09:41:40 -- setup/common.sh@31 -- # read -r var val _
00:03:46.481 09:41:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7950256 kB' 'MemAvailable: 9487460 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 457252 kB' 'Inactive: 1411364 kB' 'Active(anon): 125252 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116616 kB' 'Mapped: 47876 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 134000 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71996 kB' 'KernelStack: 6352 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:46.481 09:41:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.481 09:41:40 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace repeats for each remaining meminfo field ...]
00:03:46.483 09:41:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.483 09:41:40 -- setup/common.sh@33 -- # echo 0
00:03:46.483 09:41:40 -- setup/common.sh@33 -- # return 0
00:03:46.483 09:41:40 -- setup/hugepages.sh@99 -- # surp=0
00:03:46.483 09:41:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:46.483 09:41:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:46.483 09:41:40 -- setup/common.sh@18 -- # local node=
00:03:46.483 09:41:40 -- setup/common.sh@19 -- # local var val
00:03:46.483 09:41:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:46.483 09:41:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.483 09:41:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.483 09:41:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.483 09:41:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.483 09:41:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.483 09:41:40 -- setup/common.sh@31 -- # IFS=': '
00:03:46.483 09:41:40 -- setup/common.sh@31 -- # read -r var val _
00:03:46.483 09:41:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7950624 kB' 'MemAvailable: 9487828 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 457340 kB' 'Inactive: 1411364 kB' 'Active(anon): 125340 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116456 kB' 'Mapped: 47848 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 134000 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71996 kB' 'KernelStack: 6352 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
00:03:46.483 09:41:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:46.483 09:41:40 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace repeats for each remaining meminfo field ...]
00:03:46.485 09:41:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:46.485 09:41:40 -- setup/common.sh@33 -- # echo 0
00:03:46.485 09:41:40 -- setup/common.sh@33 -- # return 0
00:03:46.485 09:41:40 -- setup/hugepages.sh@100 -- # resv=0
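With anon, surp, and resv collected, the checks that follow (hugepages.sh@107/@109 below) reduce to one identity: the kernel's HugePages_Total must equal the configured nr_hugepages plus surplus and reserved pages. A sketch of that invariant on top of the get_meminfo sketch from earlier; reading nr_hugepages from /proc/sys/vm is an assumption, as the trace only shows the already-expanded values:

  surp=$(get_meminfo HugePages_Surp)        # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)        # 0 in this run
  total=$(get_meminfo HugePages_Total)      # 1024 in this run
  nr_hugepages=$(</proc/sys/vm/nr_hugepages)

  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent: total=$total"
  else
      echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
  fi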
00:03:46.485 09:41:40 -- setup/common.sh@33 -- # echo 0 00:03:46.485 09:41:40 -- setup/common.sh@33 -- # return 0 00:03:46.485 09:41:40 -- setup/hugepages.sh@100 -- # resv=0 00:03:46.485 nr_hugepages=1024 00:03:46.485 09:41:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.485 resv_hugepages=0 00:03:46.485 09:41:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.485 09:41:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.485 surplus_hugepages=0 00:03:46.485 anon_hugepages=0 00:03:46.485 09:41:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.485 09:41:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.485 09:41:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.485 09:41:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.485 09:41:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.485 09:41:40 -- setup/common.sh@18 -- # local node= 00:03:46.485 09:41:40 -- setup/common.sh@19 -- # local var val 00:03:46.485 09:41:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.485 09:41:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.485 09:41:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.485 09:41:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.485 09:41:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.485 09:41:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.485 09:41:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7950884 kB' 'MemAvailable: 9488088 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 457312 kB' 'Inactive: 1411364 kB' 'Active(anon): 125312 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116724 kB' 'Mapped: 47848 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 134000 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71996 kB' 'KernelStack: 6352 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:03:46.485 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 09:41:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.485 09:41:40 -- setup/common.sh@32 -- # continue 00:03:46.485 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 09:41:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.485 09:41:40 -- setup/common.sh@32 -- # continue 00:03:46.485 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 09:41:40 -- setup/common.sh@32 -- # [[ 
00:03:46.485 09:41:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:46.485 09:41:40 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:46.485 09:41:40 -- setup/common.sh@18 -- # local node=
00:03:46.485 09:41:40 -- setup/common.sh@19 -- # local var val
00:03:46.485 09:41:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:46.485 09:41:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.485 09:41:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.485 09:41:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.485 09:41:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.485 09:41:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.485 09:41:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7950884 kB' 'MemAvailable: 9488088 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 457312 kB' 'Inactive: 1411364 kB' 'Active(anon): 125312 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116724 kB' 'Mapped: 47848 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 134000 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71996 kB' 'KernelStack: 6352 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
[ trace condensed: setup/common.sh@31-@32 read/compare loop walks MemTotal through Unaccepted, no match, continue ]
00:03:46.487 09:41:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:46.487 09:41:40 -- setup/common.sh@33 -- # echo 1024
00:03:46.487 09:41:40 -- setup/common.sh@33 -- # return 0
00:03:46.487 09:41:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:46.487 09:41:40 -- setup/hugepages.sh@112 -- # get_nodes
00:03:46.487 09:41:40 -- setup/hugepages.sh@27 -- # local node
00:03:46.487 09:41:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:46.487 09:41:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:46.487 09:41:40 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:46.487 09:41:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:46.487 09:41:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:46.487 09:41:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
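The get_nodes call traced at setup/hugepages.sh@27-@33 enumerates NUMA nodes with an extglob pattern and records one expected hugepage count per node. A rough standalone equivalent, assuming the same single-node layout (the 1024 value mirrors this run; array and variable names follow the trace):

# Sketch of the node enumeration at hugepages.sh@29-@32. extglob must be
# enabled for the +([0-9]) pattern, as in the original script.
shopt -s extglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=1024   # strip the path prefix, keep the node index
done
no_nodes=${#nodes_sys[@]}            # 1 on this single-node VM
(( no_nodes > 0 )) || echo 'no NUMA nodes found' >&2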
00:03:46.487 09:41:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:46.487 09:41:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.487 09:41:40 -- setup/common.sh@18 -- # local node=0
00:03:46.487 09:41:40 -- setup/common.sh@19 -- # local var val
00:03:46.487 09:41:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:46.487 09:41:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.487 09:41:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:46.487 09:41:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:46.487 09:41:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.487 09:41:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.487 09:41:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7950964 kB' 'MemUsed: 4291000 kB' 'SwapCached: 0 kB' 'Active: 457348 kB' 'Inactive: 1411364 kB' 'Active(anon): 125348 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1753836 kB' 'Mapped: 47848 kB' 'AnonPages: 116716 kB' 'Shmem: 10472 kB' 'KernelStack: 6320 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62004 kB' 'Slab: 134000 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[ trace condensed: setup/common.sh@31-@32 read/compare loop walks MemTotal through HugePages_Free, no match, continue ]
00:03:46.488 09:41:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.488 09:41:40 -- setup/common.sh@33 -- # echo 0
00:03:46.488 09:41:40 -- setup/common.sh@33 -- # return 0
00:03:46.488 09:41:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:46.488 09:41:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:46.488 09:41:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:46.488 09:41:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:46.488 09:41:40 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:46.488 node0=1024 expecting 1024
00:03:46.488 09:41:40 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:46.488 09:41:40 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:46.488 09:41:40 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:46.488 09:41:40 -- setup/hugepages.sh@202 -- # setup output
00:03:46.488 09:41:40 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:46.488 09:41:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:47.057 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:47.057 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:47.057 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:47.057 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:47.057 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:47.057 INFO: Requested 512 hugepages but 1024 already allocated on node0
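The INFO line above is scripts/setup.sh leaving the existing 1024-page reservation in place rather than shrinking it to NRHUGE=512. A sketch of that guard against the kernel's standard per-node sysfs knob; this is an assumption about the behavior implied by the message, not the script's actual code:

# Sketch: keep an existing per-node 2 MiB hugepage reservation when it
# already covers the request (mirrors the INFO message above).
NRHUGE=512
nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cur=$(<"$nr")
if (( cur >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $cur already allocated on node0"
else
    echo "$NRHUGE" > "$nr"   # writing here requires root
fi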
00:03:47.057 09:41:40 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:47.057 09:41:40 -- setup/hugepages.sh@89 -- # local node
00:03:47.057 09:41:40 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:47.057 09:41:40 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:47.057 09:41:40 -- setup/hugepages.sh@92 -- # local surp
00:03:47.057 09:41:40 -- setup/hugepages.sh@93 -- # local resv
00:03:47.057 09:41:40 -- setup/hugepages.sh@94 -- # local anon
00:03:47.057 09:41:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:47.057 09:41:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:47.057 09:41:40 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:47.057 09:41:40 -- setup/common.sh@18 -- # local node=
00:03:47.057 09:41:40 -- setup/common.sh@19 -- # local var val
00:03:47.057 09:41:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:47.057 09:41:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.057 09:41:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.057 09:41:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.057 09:41:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.057 09:41:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.057 09:41:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7955260 kB' 'MemAvailable: 9492464 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 458212 kB' 'Inactive: 1411364 kB' 'Active(anon): 126212 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117356 kB' 'Mapped: 48328 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 133988 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71984 kB' 'KernelStack: 6460 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
[ trace condensed: setup/common.sh@31-@32 read/compare loop walks MemTotal through HardwareCorrupted, no match, continue ]
00:03:47.058 09:41:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.058 09:41:40 -- setup/common.sh@33 -- # echo 0
00:03:47.058 09:41:40 -- setup/common.sh@33 -- # return 0
00:03:47.058 09:41:40 -- setup/hugepages.sh@97 -- # anon=0
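The hugepages.sh@96 test earlier keys off /sys/kernel/mm/transparent_hugepage/enabled, where the kernel brackets the active policy (here 'always [madvise] never'); only a bracketed [never] would let the script skip the AnonHugePages read, since anonymous THP can inflate that counter. A standalone sketch of the same check:

# Sketch of the THP guard at hugepages.sh@96: read the active policy and
# only bother with AnonHugePages when THP is not fully disabled.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    echo "THP policy: $thp; AnonHugePages=${anon} kB"
fi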
00:03:47.058 09:41:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:47.058 09:41:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:47.058 09:41:40 -- setup/common.sh@18 -- # local node=
00:03:47.058 09:41:40 -- setup/common.sh@19 -- # local var val
00:03:47.058 09:41:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:47.058 09:41:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.058 09:41:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.058 09:41:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.058 09:41:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.058 09:41:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.058 09:41:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7955260 kB' 'MemAvailable: 9492464 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 457676 kB' 'Inactive: 1411364 kB' 'Active(anon): 125676 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117036 kB' 'Mapped: 47976 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 133988 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71984 kB' 'KernelStack: 6332 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
[ trace condensed: setup/common.sh@31-@32 read/compare loop walks MemTotal through HugePages_Rsvd, no match, continue ]
00:03:47.060 09:41:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.060 09:41:40 -- setup/common.sh@33 -- # echo 0
00:03:47.060 09:41:40 -- setup/common.sh@33 -- # return 0
00:03:47.060 09:41:40 -- setup/hugepages.sh@99 -- # surp=0
00:03:47.060 09:41:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:47.060 09:41:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:47.060 09:41:40 -- setup/common.sh@18 -- # local node=
00:03:47.060 09:41:40 -- setup/common.sh@19 -- # local var val
00:03:47.060 09:41:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:47.060 09:41:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.060 09:41:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.060 09:41:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.060 09:41:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.060 09:41:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.060 09:41:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7955008 kB' 'MemAvailable: 9492212 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 457368 kB' 'Inactive: 1411364 kB' 'Active(anon): 125368 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116728 kB' 'Mapped: 47848 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 133968 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71964 kB' 'KernelStack: 6352 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
[ trace condensed: setup/common.sh@31-@32 read/compare loop walks the fields from MemTotal onward, scanning for HugePages_Rsvd ]
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.061 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 09:41:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.323 09:41:40 -- setup/common.sh@33 -- # echo 0 00:03:47.323 09:41:40 -- setup/common.sh@33 -- # return 0 00:03:47.323 09:41:40 -- setup/hugepages.sh@100 -- # resv=0 00:03:47.323 nr_hugepages=1024 00:03:47.323 09:41:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.323 resv_hugepages=0 00:03:47.323 09:41:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.323 surplus_hugepages=0 00:03:47.323 09:41:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.323 anon_hugepages=0 00:03:47.323 09:41:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.323 09:41:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.323 09:41:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.323 09:41:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.323 09:41:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.323 09:41:40 -- setup/common.sh@18 -- # local node= 00:03:47.323 09:41:40 -- setup/common.sh@19 -- # local var val 00:03:47.323 09:41:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.323 09:41:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.323 09:41:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.323 09:41:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.323 09:41:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.323 09:41:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7955008 kB' 'MemAvailable: 9492212 kB' 'Buffers: 2436 kB' 'Cached: 1751400 kB' 'SwapCached: 0 kB' 'Active: 457344 kB' 'Inactive: 1411364 kB' 'Active(anon): 125344 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116692 kB' 'Mapped: 47848 kB' 'Shmem: 10472 kB' 'KReclaimable: 62004 kB' 'Slab: 133960 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71956 kB' 'KernelStack: 6336 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 
00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.323 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.323 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.324 09:41:40 -- setup/common.sh@33 -- # echo 1024 00:03:47.324 09:41:40 -- setup/common.sh@33 -- # return 0 00:03:47.324 09:41:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.324 09:41:40 -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.324 09:41:40 -- setup/hugepages.sh@27 -- # local node 00:03:47.324 09:41:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.324 09:41:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.324 09:41:40 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.324 09:41:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.324 09:41:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.324 09:41:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.324 09:41:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.324 09:41:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.324 09:41:40 -- setup/common.sh@18 -- # local node=0 00:03:47.324 09:41:40 -- 
setup/common.sh@19 -- # local var val 00:03:47.324 09:41:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.324 09:41:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.324 09:41:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.324 09:41:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.324 09:41:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.324 09:41:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7954756 kB' 'MemUsed: 4287208 kB' 'SwapCached: 0 kB' 'Active: 457380 kB' 'Inactive: 1411364 kB' 'Active(anon): 125380 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1411364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1753836 kB' 'Mapped: 47848 kB' 'AnonPages: 116764 kB' 'Shmem: 10472 kB' 'KernelStack: 6368 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62004 kB' 'Slab: 133952 kB' 'SReclaimable: 62004 kB' 'SUnreclaim: 71948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 09:41:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 09:41:40 -- setup/common.sh@32 -- # continue 
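Every lookup traced above goes through setup/common.sh's get_meminfo: mapfile the whole meminfo file into an array, strip the "Node N " prefix that the per-node copies carry, then split each line on ': ' and print the value whose key matches. A minimal standalone sketch of that idea follows; it is an illustration under those assumptions, not the verbatim SPDK helper.

# Sketch: look one key up in /proc/meminfo or a per-node meminfo file.
get_meminfo_sketch() {
    shopt -s extglob                      # needed for the +([0-9]) pattern
    local get=$1 node=${2:-}              # key to read, optional NUMA node
    local mem_f=/proc/meminfo line var val _
    local -a mem
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node lines start with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# Usage matching the trace: get_meminfo_sketch HugePages_Surp 0   -> prints 0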
00:03:47.324 09:41:40 -- setup/hugepages.sh@217 -- # clear_hp 00:03:47.324 09:41:40 -- setup/hugepages.sh@37 -- # local node hp 00:03:47.324 09:41:40 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:47.324 09:41:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.324 09:41:40 -- setup/hugepages.sh@41 -- # echo 0 00:03:47.324 09:41:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.324 09:41:40 -- setup/hugepages.sh@41 -- # echo 0 00:03:47.324 09:41:40 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:47.324 09:41:40 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:47.324
00:03:47.324 real 0m6.113s
00:03:47.324 user 0m2.748s
00:03:47.324 sys 0m3.444s
00:03:47.324 09:41:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.324 09:41:40 -- common/autotest_common.sh@10 -- # set +x
00:03:47.324 ************************************
00:03:47.324 END TEST hugepages
00:03:47.324 ************************************
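The clear_hp step just traced resets the per-node hugepage pools through sysfs so the next test group starts from a clean slate. Assuming the standard sysfs layout (a per-size nr_hugepages file under each node, written as root), the core of it looks like this sketch:

# Sketch of clear_hp: zero every hugepage pool on every NUMA node.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"    # drop all pages of this size on this node
    done
done
export CLEAR_HUGE=yes                  # the flag the suite exports afterwards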
00:03:47.324 09:41:40 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:47.324 09:41:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:47.324 09:41:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:47.324 09:41:40 -- common/autotest_common.sh@10 -- # set +x
00:03:47.324 ************************************
00:03:47.324 START TEST driver
00:03:47.324 ************************************
00:03:47.324 09:41:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:03:47.324 * Looking for test storage...
00:03:47.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:47.324 09:41:41 -- setup/driver.sh@68 -- # setup reset 00:03:47.324 09:41:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.324 09:41:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:53.894 09:41:46 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:53.894 09:41:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:53.894 09:41:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:53.894 09:41:46 -- common/autotest_common.sh@10 -- # set +x
00:03:53.894 ************************************
00:03:53.894 START TEST guess_driver
00:03:53.894 ************************************
00:03:53.894 09:41:46 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:53.894 09:41:46 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:53.894 09:41:46 -- setup/driver.sh@47 -- # local fail=0 00:03:53.894 09:41:46 -- setup/driver.sh@49 -- # pick_driver 00:03:53.894 09:41:46 -- setup/driver.sh@36 -- # vfio 00:03:53.894 09:41:46 -- setup/driver.sh@21 -- # local iommu_grups 00:03:53.894 09:41:46 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:53.894 09:41:46 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:53.894 09:41:46 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:53.894 09:41:46 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:53.894 09:41:46 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:53.894 09:41:46 -- setup/driver.sh@32 -- # return 1 00:03:53.894 09:41:46 -- setup/driver.sh@38 -- # uio 00:03:53.894 09:41:46 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:53.894 09:41:46 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:53.894 09:41:47 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:53.894 09:41:47 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:53.894 09:41:47 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:53.894 09:41:47 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:53.894 09:41:47 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:53.894 09:41:47 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:53.894 Looking for driver=uio_pci_generic
00:03:53.894 09:41:47 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:53.894 09:41:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
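pick_driver, traced above, prefers vfio and falls back to uio_pci_generic: vfio is viable only when /sys/kernel/iommu_groups is populated (or vfio's unsafe no-IOMMU mode is enabled), and the fallback counts as present when modprobe --show-depends resolves to real .ko files. A condensed sketch of that decision, not the verbatim driver.sh (the vfio branch is never taken in this run, and the driver name echoed there is an assumption):

# Sketch: pick a userspace I/O driver the way the trace above does.
pick_driver_sketch() {
    local unsafe_vfio=''
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    # vfio wins when IOMMU groups exist or unsafe no-IOMMU mode is on
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci    # assumed name; this log never reaches the vfio branch
        return 0
    fi
    # fall back to uio_pci_generic if modprobe can resolve the module:
    # a dependency listing that mentions .ko files means the module exists
    if modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found'
    return 1
}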
00:03:53.894 09:41:47 -- setup/driver.sh@45 -- # setup output config 00:03:53.894 09:41:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.894 09:41:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:03:54.153 lsblk: /dev/nvme0c0n1: not a block device
00:03:54.412 09:41:48 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:54.412 09:41:48 -- setup/driver.sh@58 -- # continue 00:03:54.412 09:41:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:54.671 09:41:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.671 09:41:48 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:54.671 09:41:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:54.671 09:41:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.671 09:41:48 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:54.671 09:41:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:54.671 09:41:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.671 09:41:48 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:54.671 09:41:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:54.671 09:41:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.671 09:41:48 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:54.671 09:41:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:54.671 09:41:48 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:54.671 09:41:48 -- setup/driver.sh@65 -- # setup reset 00:03:54.671 09:41:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.671 09:41:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:01.238
00:04:01.238 real 0m7.252s
00:04:01.238 user 0m0.944s
00:04:01.238 sys 0m1.470s
00:04:01.238 ************************************
00:04:01.238 END TEST guess_driver
00:04:01.238 ************************************
00:04:01.238 09:41:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.238 09:41:54 -- common/autotest_common.sh@10 -- # set +x
00:04:01.238
00:04:01.238 real 0m13.297s
00:04:01.238 user 0m1.297s
00:04:01.238 sys 0m2.307s
00:04:01.238 09:41:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.238 09:41:54 -- common/autotest_common.sh@10 -- # set +x
00:04:01.238 ************************************
00:04:01.238 END TEST driver
00:04:01.238 ************************************
00:04:01.238 09:41:54 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:01.238 09:41:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:01.238 09:41:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:01.238 09:41:54 -- common/autotest_common.sh@10 -- # set +x
00:04:01.238 ************************************
00:04:01.238 START TEST devices
00:04:01.238 ************************************
00:04:01.238 09:41:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:01.238 * Looking for test storage...
00:04:01.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:01.238 09:41:54 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:01.238 09:41:54 -- setup/devices.sh@192 -- # setup reset 00:04:01.238 09:41:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.238 09:41:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:02.175 09:41:55 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:02.175 09:41:55 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:02.175 09:41:55 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:02.175 09:41:55 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:04:02.175 09:41:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.175 09:41:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:04:02.175 09:41:55 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:04:02.175 09:41:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:04:02.175 09:41:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:04:02.175 09:41:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.175 09:41:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:02.175 09:41:55 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:02.175 09:41:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:02.175 09:41:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:04:02.175 09:41:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.175 09:41:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:02.175 09:41:55 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:02.175 09:41:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:02.175 09:41:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:04:02.175 09:41:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.175 09:41:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:02.175 09:41:55 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:02.175 09:41:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:02.175 09:41:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:04:02.175 09:41:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.175 09:41:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:02.175 09:41:55 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:02.175 09:41:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:02.175 09:41:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:04:02.176 09:41:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.176 09:41:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:04:02.176 09:41:55 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:04:02.176 09:41:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:02.176 09:41:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:04:02.176 09:41:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.176 09:41:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:04:02.176 09:41:55 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:04:02.176 09:41:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:02.176 09:41:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
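get_zoned_devs boils down to one sysfs probe per namespace: a device counts as zoned when /sys/block/<dev>/queue/zoned reports anything other than "none". A standalone sketch of that check (same probe, simplified naming):

# Sketch: true if the block device uses zoned storage (e.g. host-managed ZNS).
is_block_zoned_sketch() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1   # no queue info at all
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}
# Every namespace above reports "none", so zoned_devs stays empty in this run.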
00:04:02.176 09:41:55 -- setup/devices.sh@196 -- # blocks=() 00:04:02.176 09:41:55 -- setup/devices.sh@196 -- # declare -a blocks 00:04:02.176 09:41:55 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:02.176 09:41:55 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:02.176 09:41:55 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:02.176 09:41:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:02.176 09:41:55 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:02.176 09:41:55 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:02.176 09:41:55 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:04:02.176 09:41:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:02.176 09:41:55 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:02.176 09:41:55 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:02.176 09:41:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:04:02.176 No valid GPT data, bailing
00:04:02.176 09:41:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:02.176 09:41:55 -- scripts/common.sh@393 -- # pt= 00:04:02.176 09:41:55 -- scripts/common.sh@394 -- # return 1 00:04:02.176 09:41:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:02.176 09:41:55 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:02.176 09:41:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:02.176 09:41:55 -- setup/common.sh@80 -- # echo 1073741824 00:04:02.176 09:41:55 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size ))
00:04:02.176 09:41:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:02.176 09:41:55 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:02.176 09:41:55 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:02.176 09:41:55 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:02.176 09:41:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:02.176 09:41:55 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:02.176 09:41:55 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:02.176 09:41:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:04:02.176 No valid GPT data, bailing
00:04:02.176 09:41:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:02.176 09:41:55 -- scripts/common.sh@393 -- # pt= 00:04:02.176 09:41:55 -- scripts/common.sh@394 -- # return 1 00:04:02.176 09:41:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:02.176 09:41:55 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:02.176 09:41:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:02.176 09:41:55 -- setup/common.sh@80 -- # echo 4294967296 00:04:02.176 09:41:55 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:02.176 09:41:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:02.176 09:41:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0
*\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:02.176 09:41:55 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:02.176 09:41:55 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:02.176 09:41:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:02.176 No valid GPT data, bailing 00:04:02.176 09:41:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:02.176 09:41:55 -- scripts/common.sh@393 -- # pt= 00:04:02.176 09:41:55 -- scripts/common.sh@394 -- # return 1 00:04:02.176 09:41:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:02.176 09:41:55 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:02.176 09:41:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:02.176 09:41:55 -- setup/common.sh@80 -- # echo 4294967296 00:04:02.176 09:41:55 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:02.176 09:41:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:02.176 09:41:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:02.176 09:41:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:02.176 09:41:55 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:02.176 09:41:55 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:02.176 09:41:55 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:02.176 09:41:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:02.176 09:41:55 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:02.176 09:41:55 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:02.176 09:41:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:02.176 No valid GPT data, bailing 00:04:02.176 09:41:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:02.176 09:41:55 -- scripts/common.sh@393 -- # pt= 00:04:02.176 09:41:55 -- scripts/common.sh@394 -- # return 1 00:04:02.176 09:41:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:02.176 09:41:55 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:02.176 09:41:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:02.176 09:41:55 -- setup/common.sh@80 -- # echo 4294967296 00:04:02.176 09:41:55 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:02.176 09:41:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:02.176 09:41:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:02.176 09:41:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:02.176 09:41:55 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:02.176 09:41:55 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:02.176 09:41:55 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:02.176 09:41:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:02.176 09:41:55 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:02.176 09:41:55 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:02.176 09:41:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:02.452 No valid GPT data, bailing 00:04:02.452 09:41:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:02.452 09:41:55 -- scripts/common.sh@393 -- # pt= 00:04:02.452 09:41:55 -- scripts/common.sh@394 -- # return 1 00:04:02.452 09:41:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:02.452 09:41:55 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:02.452 09:41:55 -- setup/common.sh@78 
-- # [[ -e /sys/block/nvme2n1 ]] 00:04:02.452 09:41:55 -- setup/common.sh@80 -- # echo 6343335936 00:04:02.452 09:41:55 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:02.452 09:41:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:02.452 09:41:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:02.452 09:41:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:02.452 09:41:55 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:02.452 09:41:55 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:02.452 09:41:55 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:02.452 09:41:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:02.452 09:41:55 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:02.452 09:41:55 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:02.452 09:41:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:02.452 No valid GPT data, bailing 00:04:02.452 09:41:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:02.452 09:41:56 -- scripts/common.sh@393 -- # pt= 00:04:02.452 09:41:56 -- scripts/common.sh@394 -- # return 1 00:04:02.452 09:41:56 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:02.452 09:41:56 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:02.452 09:41:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:02.452 09:41:56 -- setup/common.sh@80 -- # echo 5368709120 00:04:02.452 09:41:56 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:02.452 09:41:56 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:02.452 09:41:56 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:02.452 09:41:56 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:02.452 09:41:56 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:02.452 09:41:56 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:02.452 09:41:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:02.452 09:41:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:02.452 09:41:56 -- common/autotest_common.sh@10 -- # set +x 00:04:02.452 ************************************ 00:04:02.452 START TEST nvme_mount 00:04:02.452 ************************************ 00:04:02.452 09:41:56 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:02.452 09:41:56 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:02.452 09:41:56 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:02.452 09:41:56 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.452 09:41:56 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:02.452 09:41:56 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:02.452 09:41:56 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:02.452 09:41:56 -- setup/common.sh@40 -- # local part_no=1 00:04:02.452 09:41:56 -- setup/common.sh@41 -- # local size=1073741824 00:04:02.452 09:41:56 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:02.452 09:41:56 -- setup/common.sh@44 -- # parts=() 00:04:02.452 09:41:56 -- setup/common.sh@44 -- # local parts 00:04:02.452 09:41:56 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:02.452 09:41:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:02.452 09:41:56 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:02.452 09:41:56 -- setup/common.sh@46 -- # (( 
part++ )) 00:04:02.452 09:41:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:02.452 09:41:56 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:02.452 09:41:56 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:02.452 09:41:56 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:03.396 Creating new GPT entries in memory. 00:04:03.396 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:03.396 other utilities. 00:04:03.396 09:41:57 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:03.396 09:41:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.396 09:41:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:03.396 09:41:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:03.396 09:41:57 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:04.771 Creating new GPT entries in memory. 00:04:04.771 The operation has completed successfully. 00:04:04.771 09:41:58 -- setup/common.sh@57 -- # (( part++ )) 00:04:04.771 09:41:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.771 09:41:58 -- setup/common.sh@62 -- # wait 54224 00:04:04.771 09:41:58 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.771 09:41:58 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:04.771 09:41:58 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.771 09:41:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:04.771 09:41:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:04.771 09:41:58 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.771 09:41:58 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:04.771 09:41:58 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:04.771 09:41:58 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:04.771 09:41:58 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.771 09:41:58 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:04.771 09:41:58 -- setup/devices.sh@53 -- # local found=0 00:04:04.771 09:41:58 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:04.771 09:41:58 -- setup/devices.sh@56 -- # : 00:04:04.771 09:41:58 -- setup/devices.sh@59 -- # local pci status 00:04:04.771 09:41:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.771 09:41:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:04.771 09:41:58 -- setup/devices.sh@47 -- # setup output config 00:04:04.771 09:41:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.771 09:41:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:04.771 09:41:58 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:04.771 09:41:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.771 09:41:58 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:04.771 09:41:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.029 
09:41:58 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:05.029 09:41:58 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:05.029 09:41:58 -- setup/devices.sh@63 -- # found=1 00:04:05.029 09:41:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.029 09:41:58 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:05.029 09:41:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.289 lsblk: /dev/nvme0c0n1: not a block device 00:04:05.289 09:41:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:05.289 09:41:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.289 09:41:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:05.289 09:41:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.548 09:41:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.548 09:41:59 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:05.548 09:41:59 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.548 09:41:59 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.548 09:41:59 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:05.548 09:41:59 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:05.548 09:41:59 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.548 09:41:59 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.548 09:41:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:05.548 09:41:59 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:05.548 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:05.548 09:41:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:05.548 09:41:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:05.807 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:05.807 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:05.807 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:05.807 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:05.807 09:41:59 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:05.807 09:41:59 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:05.807 09:41:59 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.807 09:41:59 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:05.807 09:41:59 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:05.807 09:41:59 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.807 09:41:59 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:05.807 09:41:59 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:05.807 09:41:59 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:05.807 09:41:59 -- 
setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.807 09:41:59 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:05.807 09:41:59 -- setup/devices.sh@53 -- # local found=0 00:04:05.807 09:41:59 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.807 09:41:59 -- setup/devices.sh@56 -- # : 00:04:05.807 09:41:59 -- setup/devices.sh@59 -- # local pci status 00:04:05.807 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.807 09:41:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:05.807 09:41:59 -- setup/devices.sh@47 -- # setup output config 00:04:05.807 09:41:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.807 09:41:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:05.807 09:41:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:05.807 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.066 09:41:59 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:06.066 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.325 09:41:59 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:06.325 09:41:59 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:06.325 09:41:59 -- setup/devices.sh@63 -- # found=1 00:04:06.325 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.325 09:41:59 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:06.325 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.325 lsblk: /dev/nvme0c0n1: not a block device 00:04:06.583 09:42:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:06.583 09:42:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.583 09:42:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:06.583 09:42:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.583 09:42:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.583 09:42:00 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:06.583 09:42:00 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:06.583 09:42:00 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:06.583 09:42:00 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:06.583 09:42:00 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:06.583 09:42:00 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:06.583 09:42:00 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:06.583 09:42:00 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:06.583 09:42:00 -- setup/devices.sh@50 -- # local mount_point= 00:04:06.583 09:42:00 -- setup/devices.sh@51 -- # local test_file= 00:04:06.583 09:42:00 -- setup/devices.sh@53 -- # local found=0 00:04:06.583 09:42:00 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:06.583 09:42:00 -- setup/devices.sh@59 -- # local pci status 00:04:06.583 09:42:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.583 09:42:00 -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:06.583 09:42:00 -- setup/devices.sh@47 -- # setup output config 00:04:06.583 09:42:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.583 09:42:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.841 09:42:00 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:06.841 09:42:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.099 09:42:00 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.099 09:42:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.357 09:42:00 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.357 09:42:00 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:07.357 09:42:00 -- setup/devices.sh@63 -- # found=1 00:04:07.357 09:42:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.357 09:42:00 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.357 09:42:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.357 lsblk: /dev/nvme0c0n1: not a block device 00:04:07.615 09:42:01 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.615 09:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.615 09:42:01 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.615 09:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.615 09:42:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.615 09:42:01 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:07.615 09:42:01 -- setup/devices.sh@68 -- # return 0 00:04:07.615 09:42:01 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:07.615 09:42:01 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:07.615 09:42:01 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:07.615 09:42:01 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:07.615 09:42:01 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:07.615 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:07.615 00:04:07.615 real 0m5.300s 00:04:07.615 user 0m1.328s 00:04:07.615 sys 0m1.712s 00:04:07.615 09:42:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.615 09:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:07.615 ************************************ 00:04:07.615 END TEST nvme_mount 00:04:07.615 ************************************ 00:04:07.874 09:42:01 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:07.874 09:42:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.874 09:42:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.874 09:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:07.874 ************************************ 00:04:07.874 START TEST dm_mount 00:04:07.874 ************************************ 00:04:07.874 09:42:01 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:07.874 09:42:01 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:07.874 09:42:01 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:07.874 09:42:01 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:07.874 09:42:01 -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:04:07.874 09:42:01 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:07.874 09:42:01 -- setup/common.sh@40 -- # local part_no=2 00:04:07.874 
09:42:01 -- setup/common.sh@41 -- # local size=1073741824 00:04:07.874 09:42:01 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:07.874 09:42:01 -- setup/common.sh@44 -- # parts=() 00:04:07.874 09:42:01 -- setup/common.sh@44 -- # local parts 00:04:07.874 09:42:01 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:07.874 09:42:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:07.874 09:42:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:07.874 09:42:01 -- setup/common.sh@46 -- # (( part++ )) 00:04:07.874 09:42:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:07.874 09:42:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:07.874 09:42:01 -- setup/common.sh@46 -- # (( part++ )) 00:04:07.874 09:42:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:07.874 09:42:01 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:07.874 09:42:01 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:07.874 09:42:01 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:08.820 Creating new GPT entries in memory. 00:04:08.820 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:08.820 other utilities. 00:04:08.820 09:42:02 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:08.820 09:42:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:08.820 09:42:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:08.820 09:42:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:08.820 09:42:02 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:09.756 Creating new GPT entries in memory. 00:04:09.756 The operation has completed successfully. 00:04:09.756 09:42:03 -- setup/common.sh@57 -- # (( part++ )) 00:04:09.756 09:42:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:09.756 09:42:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:09.756 09:42:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:09.756 09:42:03 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:11.132 The operation has completed successfully. 
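The trace above is the suite's partition_drive helper at work: it zaps any existing GPT with sgdisk, then creates each partition under flock on the whole disk while sync_dev_uevents.sh waits for the kernel to announce the new block devices. A minimal standalone sketch of the same flow follows; the device name and sector ranges are illustrative, taken from the 2048:264191 / 264192:526335 ranges in the trace, and udevadm settle stands in for the uevent-sync script.

```bash
disk=/dev/nvme1n1                       # illustrative; matches the trace above
size_sectors=$((1073741824 / 4096))     # 262144, as computed at setup/common.sh@51
start=2048

sgdisk "$disk" --zap-all                # destroy old GPT/MBR structures
for part in 1 2; do
    end=$((start + size_sectors - 1))
    flock "$disk" sgdisk "$disk" --new=${part}:${start}:${end}
    start=$((end + 1))
done
udevadm settle                          # wait until /dev/nvme1n1p1..p2 exist
```

Holding flock on the disk node while sgdisk rewrites the GPT is what keeps concurrent partition-table readers from racing the rewrite on a busy CI host.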
00:04:11.132 09:42:04 -- setup/common.sh@57 -- # (( part++ )) 00:04:11.132 09:42:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.132 09:42:04 -- setup/common.sh@62 -- # wait 54943 00:04:11.132 09:42:04 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:11.132 09:42:04 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:11.132 09:42:04 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:11.132 09:42:04 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:11.132 09:42:04 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:11.132 09:42:04 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:11.132 09:42:04 -- setup/devices.sh@161 -- # break 00:04:11.132 09:42:04 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:11.132 09:42:04 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:11.132 09:42:04 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:11.132 09:42:04 -- setup/devices.sh@166 -- # dm=dm-0 00:04:11.132 09:42:04 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:11.132 09:42:04 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:11.132 09:42:04 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:11.132 09:42:04 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:11.133 09:42:04 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:11.133 09:42:04 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:11.133 09:42:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:11.133 09:42:04 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:11.133 09:42:04 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:11.133 09:42:04 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:11.133 09:42:04 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:11.133 09:42:04 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:11.133 09:42:04 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:11.133 09:42:04 -- setup/devices.sh@53 -- # local found=0 00:04:11.133 09:42:04 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:11.133 09:42:04 -- setup/devices.sh@56 -- # : 00:04:11.133 09:42:04 -- setup/devices.sh@59 -- # local pci status 00:04:11.133 09:42:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.133 09:42:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:11.133 09:42:04 -- setup/devices.sh@47 -- # setup output config 00:04:11.133 09:42:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.133 09:42:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:11.133 09:42:04 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:11.133 09:42:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.133 09:42:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:11.133 09:42:04 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.391 09:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:11.391 09:42:05 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:11.391 09:42:05 -- setup/devices.sh@63 -- # found=1 00:04:11.391 09:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.391 09:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:11.391 09:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.649 lsblk: /dev/nvme0c0n1: not a block device 00:04:11.649 09:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:11.649 09:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.907 09:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:11.907 09:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.907 09:42:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:11.907 09:42:05 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:11.907 09:42:05 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:11.907 09:42:05 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:11.907 09:42:05 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:11.907 09:42:05 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:11.907 09:42:05 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:11.907 09:42:05 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:11.907 09:42:05 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:11.907 09:42:05 -- setup/devices.sh@50 -- # local mount_point= 00:04:11.907 09:42:05 -- setup/devices.sh@51 -- # local test_file= 00:04:11.907 09:42:05 -- setup/devices.sh@53 -- # local found=0 00:04:11.907 09:42:05 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:11.908 09:42:05 -- setup/devices.sh@59 -- # local pci status 00:04:11.908 09:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.908 09:42:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:11.908 09:42:05 -- setup/devices.sh@47 -- # setup output config 00:04:11.908 09:42:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.908 09:42:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:11.908 09:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:11.908 09:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.166 09:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:12.166 09:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.424 09:42:06 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:12.424 09:42:06 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:12.424 09:42:06 -- setup/devices.sh@63 -- # found=1 00:04:12.424 09:42:06 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:12.424 09:42:06 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:12.424 09:42:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.683 lsblk: /dev/nvme0c0n1: not a block device 00:04:12.683 09:42:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:12.683 09:42:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.683 09:42:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:12.683 09:42:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.942 09:42:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.942 09:42:06 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:12.942 09:42:06 -- setup/devices.sh@68 -- # return 0 00:04:12.942 09:42:06 -- setup/devices.sh@187 -- # cleanup_dm 00:04:12.942 09:42:06 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:12.942 09:42:06 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:12.942 09:42:06 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:12.942 09:42:06 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:12.942 09:42:06 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:12.942 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:12.942 09:42:06 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:12.942 09:42:06 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:12.942 00:04:12.942 real 0m5.118s 00:04:12.942 user 0m0.920s 00:04:12.942 sys 0m1.145s 00:04:12.942 09:42:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.942 09:42:06 -- common/autotest_common.sh@10 -- # set +x 00:04:12.942 ************************************ 00:04:12.942 END TEST dm_mount 00:04:12.942 ************************************ 00:04:12.942 09:42:06 -- setup/devices.sh@1 -- # cleanup 00:04:12.942 09:42:06 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:12.942 09:42:06 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.942 09:42:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:12.942 09:42:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:12.942 09:42:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:12.942 09:42:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:13.200 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:13.201 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:13.201 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:13.201 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:13.201 09:42:06 -- setup/devices.sh@12 -- # cleanup_dm 00:04:13.201 09:42:06 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:13.201 09:42:06 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:13.201 09:42:06 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:13.201 09:42:06 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:13.201 09:42:06 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:13.201 09:42:06 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:13.201 00:04:13.201 real 0m12.514s 00:04:13.201 user 0m3.162s 00:04:13.201 sys 0m3.738s 00:04:13.201 09:42:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.201 09:42:06 -- common/autotest_common.sh@10 -- # set +x 00:04:13.201 
************************************ 00:04:13.201 END TEST devices 00:04:13.201 ************************************ 00:04:13.201 00:04:13.201 real 0m43.751s 00:04:13.201 user 0m10.125s 00:04:13.201 sys 0m13.509s 00:04:13.201 09:42:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.201 09:42:06 -- common/autotest_common.sh@10 -- # set +x 00:04:13.201 ************************************ 00:04:13.201 END TEST setup.sh 00:04:13.201 ************************************ 00:04:13.201 09:42:06 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:13.473 Hugepages 00:04:13.474 node hugesize free / total 00:04:13.474 node0 1048576kB 0 / 0 00:04:13.474 node0 2048kB 2048 / 2048 00:04:13.474 00:04:13.474 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.474 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:13.735 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:13.735 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:13.994 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:13.994 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0c0n1 00:04:13.994 09:42:07 -- spdk/autotest.sh@141 -- # uname -s 00:04:13.994 09:42:07 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:13.994 09:42:07 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:13.994 09:42:07 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:14.930 lsblk: /dev/nvme0c0n1: not a block device 00:04:14.930 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.209 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.209 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.209 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.209 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.468 09:42:08 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:16.497 09:42:09 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:16.497 09:42:09 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:16.498 09:42:09 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:16.498 09:42:09 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:16.498 09:42:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:16.498 09:42:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:16.498 09:42:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:16.498 09:42:09 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:16.498 09:42:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:16.498 09:42:10 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:16.498 09:42:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:16.498 09:42:10 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.066 Waiting for block devices as requested 00:04:17.066 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.066 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.324 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.324 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:22.598 * Events for some block/disk devices 
(0000:00:09.0) were not caught, they may be missing 00:04:22.598 09:42:16 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:22.598 09:42:16 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:22.598 09:42:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:04:22.598 09:42:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:22.598 09:42:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme2 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:22.598 09:42:16 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:22.598 09:42:16 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:22.598 09:42:16 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1542 -- # continue 00:04:22.598 09:42:16 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:22.598 09:42:16 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:22.598 09:42:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:22.598 09:42:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:04:22.598 09:42:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:22.598 09:42:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:22.598 09:42:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:22.598 09:42:16 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme3 00:04:22.598 09:42:16 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme3 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme3 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:22.598 09:42:16 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:22.598 09:42:16 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # 
nvme id-ctrl /dev/nvme3 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:22.598 09:42:16 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1542 -- # continue 00:04:22.598 09:42:16 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:22.598 09:42:16 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:04:22.598 09:42:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:22.598 09:42:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:08.0/nvme/nvme 00:04:22.598 09:42:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:22.598 09:42:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:22.598 09:42:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:22.598 09:42:16 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:04:22.598 09:42:16 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:22.598 09:42:16 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:22.598 09:42:16 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:22.598 09:42:16 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1542 -- # continue 00:04:22.598 09:42:16 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:22.598 09:42:16 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:04:22.598 09:42:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:22.598 09:42:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:09.0/nvme/nvme 00:04:22.598 09:42:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:22.598 09:42:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:22.598 09:42:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:22.598 09:42:16 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:22.598 09:42:16 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:22.598 09:42:16 -- 
common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:22.598 09:42:16 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:22.598 09:42:16 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:22.598 09:42:16 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:22.598 09:42:16 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:22.598 09:42:16 -- common/autotest_common.sh@1542 -- # continue 00:04:22.598 09:42:16 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:22.598 09:42:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:22.598 09:42:16 -- common/autotest_common.sh@10 -- # set +x 00:04:22.598 09:42:16 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:22.598 09:42:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:22.598 09:42:16 -- common/autotest_common.sh@10 -- # set +x 00:04:22.598 09:42:16 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.535 lsblk: /dev/nvme0c0n1: not a block device 00:04:23.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.794 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:23.794 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:23.794 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:23.794 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.053 09:42:17 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:24.053 09:42:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:24.053 09:42:17 -- common/autotest_common.sh@10 -- # set +x 00:04:24.053 09:42:17 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:24.053 09:42:17 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:24.053 09:42:17 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:24.053 09:42:17 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:24.053 09:42:17 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:24.053 09:42:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:24.053 09:42:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:24.053 09:42:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:24.053 09:42:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.053 09:42:17 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:24.053 09:42:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:24.054 09:42:17 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:24.054 09:42:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:24.054 09:42:17 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:24.054 09:42:17 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:24.054 09:42:17 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:24.054 09:42:17 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.054 09:42:17 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:24.054 09:42:17 -- common/autotest_common.sh@1565 -- # cat 
/sys/bus/pci/devices/0000:00:07.0/device 00:04:24.054 09:42:17 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:24.054 09:42:17 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.054 09:42:17 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:24.054 09:42:17 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:04:24.054 09:42:17 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:24.054 09:42:17 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.054 09:42:17 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:24.054 09:42:17 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:04:24.054 09:42:17 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:24.054 09:42:17 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.054 09:42:17 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:24.054 09:42:17 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:24.054 09:42:17 -- common/autotest_common.sh@1578 -- # return 0 00:04:24.054 09:42:17 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:24.054 09:42:17 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:24.054 09:42:17 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:24.054 09:42:17 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:24.054 09:42:17 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:24.054 09:42:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:24.054 09:42:17 -- common/autotest_common.sh@10 -- # set +x 00:04:24.054 09:42:17 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:24.054 09:42:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.054 09:42:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.054 09:42:17 -- common/autotest_common.sh@10 -- # set +x 00:04:24.054 ************************************ 00:04:24.054 START TEST env 00:04:24.054 ************************************ 00:04:24.054 09:42:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:24.054 * Looking for test storage... 
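The loop above is opal_revert_cleanup deciding whether any controller needs an OPAL revert: every enumerated BDF reports PCI device ID 0x0010 (QEMU's emulated NVMe), so the 0x0a54 comparison never matches and the function returns without doing any work. A hedged sketch of that filter, assuming gen_nvme.sh emits the JSON config shown in the trace:

```bash
rootdir=/home/vagrant/spdk_repo/spdk
# enumerate NVMe PCI addresses, as get_nvme_bdfs does in the trace
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # PCI device ID
    if [[ $device == 0x0a54 ]]; then
        echo "$bdf matches 0x0a54, OPAL revert needed"
    fi
done
```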
00:04:24.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:24.054 09:42:17 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:24.054 09:42:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.054 09:42:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.054 09:42:17 -- common/autotest_common.sh@10 -- # set +x 00:04:24.054 ************************************ 00:04:24.054 START TEST env_memory 00:04:24.054 ************************************ 00:04:24.054 09:42:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:24.313 00:04:24.313 00:04:24.313 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.313 http://cunit.sourceforge.net/ 00:04:24.313 00:04:24.313 00:04:24.313 Suite: memory 00:04:24.313 Test: alloc and free memory map ...[2024-06-10 09:42:17.893972] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:24.313 passed 00:04:24.313 Test: mem map translation ...[2024-06-10 09:42:17.955337] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:24.313 [2024-06-10 09:42:17.955417] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:24.313 [2024-06-10 09:42:17.955519] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:24.313 [2024-06-10 09:42:17.955547] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:24.313 passed 00:04:24.313 Test: mem map registration ...[2024-06-10 09:42:18.059211] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:24.313 [2024-06-10 09:42:18.059293] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:24.572 passed 00:04:24.572 Test: mem map adjacent registrations ...passed 00:04:24.572 00:04:24.572 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.572 suites 1 1 n/a 0 0 00:04:24.572 tests 4 4 4 0 0 00:04:24.572 asserts 152 152 152 0 n/a 00:04:24.572 00:04:24.572 Elapsed time = 0.358 seconds 00:04:24.572 ************************************ 00:04:24.572 END TEST env_memory 00:04:24.572 ************************************ 00:04:24.572 00:04:24.572 real 0m0.399s 00:04:24.572 user 0m0.361s 00:04:24.572 sys 0m0.032s 00:04:24.572 09:42:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.572 09:42:18 -- common/autotest_common.sh@10 -- # set +x 00:04:24.572 09:42:18 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:24.572 09:42:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.572 09:42:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.572 09:42:18 -- common/autotest_common.sh@10 -- # set +x 00:04:24.572 ************************************ 00:04:24.572 START TEST env_vtophys 00:04:24.572 ************************************ 00:04:24.572 09:42:18 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:24.572 EAL: lib.eal log level changed from notice to debug 00:04:24.572 EAL: Detected lcore 0 as core 0 on socket 0 00:04:24.572 EAL: Detected lcore 1 as core 0 on socket 0 00:04:24.572 EAL: Detected lcore 2 as core 0 on socket 0 00:04:24.572 EAL: Detected lcore 3 as core 0 on socket 0 00:04:24.572 EAL: Detected lcore 4 as core 0 on socket 0 00:04:24.572 EAL: Detected lcore 5 as core 0 on socket 0 00:04:24.572 EAL: Detected lcore 6 as core 0 on socket 0 00:04:24.572 EAL: Detected lcore 7 as core 0 on socket 0 00:04:24.572 EAL: Detected lcore 8 as core 0 on socket 0 00:04:24.572 EAL: Detected lcore 9 as core 0 on socket 0 00:04:24.572 EAL: Maximum logical cores by configuration: 128 00:04:24.572 EAL: Detected CPU lcores: 10 00:04:24.572 EAL: Detected NUMA nodes: 1 00:04:24.572 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:24.572 EAL: Detected shared linkage of DPDK 00:04:24.831 EAL: No shared files mode enabled, IPC will be disabled 00:04:24.831 EAL: Selected IOVA mode 'PA' 00:04:24.831 EAL: Probing VFIO support... 00:04:24.831 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:24.831 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:24.831 EAL: Ask a virtual area of 0x2e000 bytes 00:04:24.831 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:24.831 EAL: Setting up physically contiguous memory... 00:04:24.831 EAL: Setting maximum number of open files to 524288 00:04:24.831 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:24.831 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:24.831 EAL: Ask a virtual area of 0x61000 bytes 00:04:24.831 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:24.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:24.831 EAL: Ask a virtual area of 0x400000000 bytes 00:04:24.831 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:24.831 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:24.831 EAL: Ask a virtual area of 0x61000 bytes 00:04:24.831 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:24.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:24.831 EAL: Ask a virtual area of 0x400000000 bytes 00:04:24.831 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:24.831 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:24.831 EAL: Ask a virtual area of 0x61000 bytes 00:04:24.831 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:24.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:24.831 EAL: Ask a virtual area of 0x400000000 bytes 00:04:24.831 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:24.831 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:24.831 EAL: Ask a virtual area of 0x61000 bytes 00:04:24.831 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:24.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:24.831 EAL: Ask a virtual area of 0x400000000 bytes 00:04:24.831 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:24.831 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:24.831 EAL: Hugepages will be freed exactly as allocated. 
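Before the allocator tests run, EAL reserves four 16 GiB virtual areas (8192 segments x 2 MiB per memseg list, i.e. the 0x400000000-byte requests above) and backs them on demand with the 2 MiB hugepages provisioned earlier ("node0 2048kB 2048 / 2048"). A quick way to inspect or re-provision that pool, shown as an illustrative sketch rather than part of the test itself:

```bash
grep -i huge /proc/meminfo              # totals and free pages per size
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# reserve 2048 x 2 MiB pages (4 GiB), the amount this VM reports:
echo 2048 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
```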
00:04:24.831 EAL: No shared files mode enabled, IPC is disabled 00:04:24.831 EAL: No shared files mode enabled, IPC is disabled 00:04:24.831 EAL: TSC frequency is ~2200000 KHz 00:04:24.831 EAL: Main lcore 0 is ready (tid=7f5a40e1fa40;cpuset=[0]) 00:04:24.831 EAL: Trying to obtain current memory policy. 00:04:24.831 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.831 EAL: Restoring previous memory policy: 0 00:04:24.831 EAL: request: mp_malloc_sync 00:04:24.831 EAL: No shared files mode enabled, IPC is disabled 00:04:24.831 EAL: Heap on socket 0 was expanded by 2MB 00:04:24.832 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:24.832 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:24.832 EAL: Mem event callback 'spdk:(nil)' registered 00:04:24.832 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:24.832 00:04:24.832 00:04:24.832 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.832 http://cunit.sourceforge.net/ 00:04:24.832 00:04:24.832 00:04:24.832 Suite: components_suite 00:04:25.400 Test: vtophys_malloc_test ...passed 00:04:25.400 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:25.400 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.400 EAL: Restoring previous memory policy: 4 00:04:25.400 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.400 EAL: request: mp_malloc_sync 00:04:25.400 EAL: No shared files mode enabled, IPC is disabled 00:04:25.400 EAL: Heap on socket 0 was expanded by 4MB 00:04:25.400 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.400 EAL: request: mp_malloc_sync 00:04:25.400 EAL: No shared files mode enabled, IPC is disabled 00:04:25.400 EAL: Heap on socket 0 was shrunk by 4MB 00:04:25.400 EAL: Trying to obtain current memory policy. 00:04:25.400 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.400 EAL: Restoring previous memory policy: 4 00:04:25.400 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.400 EAL: request: mp_malloc_sync 00:04:25.400 EAL: No shared files mode enabled, IPC is disabled 00:04:25.400 EAL: Heap on socket 0 was expanded by 6MB 00:04:25.400 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.400 EAL: request: mp_malloc_sync 00:04:25.400 EAL: No shared files mode enabled, IPC is disabled 00:04:25.400 EAL: Heap on socket 0 was shrunk by 6MB 00:04:25.400 EAL: Trying to obtain current memory policy. 00:04:25.400 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.400 EAL: Restoring previous memory policy: 4 00:04:25.400 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.400 EAL: request: mp_malloc_sync 00:04:25.400 EAL: No shared files mode enabled, IPC is disabled 00:04:25.400 EAL: Heap on socket 0 was expanded by 10MB 00:04:25.400 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.400 EAL: request: mp_malloc_sync 00:04:25.400 EAL: No shared files mode enabled, IPC is disabled 00:04:25.400 EAL: Heap on socket 0 was shrunk by 10MB 00:04:25.400 EAL: Trying to obtain current memory policy. 
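Each expanded/shrunk pair above is the 'spdk:(nil)' mem event callback reacting to a dynamic EAL heap grow/shrink of roughly the malloc size rounded up to hugepages. A hedged way to watch the same activity from outside the test, assuming 2 MB hugepages (the sampling interval is illustrative):

    # sample hugepage accounting while vtophys runs
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/test/env/vtophys/vtophys & pid=$!
    while kill -0 "$pid" 2>/dev/null; do
        grep -E 'HugePages_(Total|Free)' /proc/meminfo
        sleep 0.2
    done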
00:04:25.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.401 EAL: Restoring previous memory policy: 4 00:04:25.401 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.401 EAL: request: mp_malloc_sync 00:04:25.401 EAL: No shared files mode enabled, IPC is disabled 00:04:25.401 EAL: Heap on socket 0 was expanded by 18MB 00:04:25.401 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.401 EAL: request: mp_malloc_sync 00:04:25.401 EAL: No shared files mode enabled, IPC is disabled 00:04:25.401 EAL: Heap on socket 0 was shrunk by 18MB 00:04:25.401 EAL: Trying to obtain current memory policy. 00:04:25.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.401 EAL: Restoring previous memory policy: 4 00:04:25.401 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.401 EAL: request: mp_malloc_sync 00:04:25.401 EAL: No shared files mode enabled, IPC is disabled 00:04:25.401 EAL: Heap on socket 0 was expanded by 34MB 00:04:25.401 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.401 EAL: request: mp_malloc_sync 00:04:25.401 EAL: No shared files mode enabled, IPC is disabled 00:04:25.401 EAL: Heap on socket 0 was shrunk by 34MB 00:04:25.401 EAL: Trying to obtain current memory policy. 00:04:25.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.401 EAL: Restoring previous memory policy: 4 00:04:25.401 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.401 EAL: request: mp_malloc_sync 00:04:25.401 EAL: No shared files mode enabled, IPC is disabled 00:04:25.401 EAL: Heap on socket 0 was expanded by 66MB 00:04:25.401 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.401 EAL: request: mp_malloc_sync 00:04:25.401 EAL: No shared files mode enabled, IPC is disabled 00:04:25.401 EAL: Heap on socket 0 was shrunk by 66MB 00:04:25.660 EAL: Trying to obtain current memory policy. 00:04:25.660 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.660 EAL: Restoring previous memory policy: 4 00:04:25.660 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.660 EAL: request: mp_malloc_sync 00:04:25.660 EAL: No shared files mode enabled, IPC is disabled 00:04:25.660 EAL: Heap on socket 0 was expanded by 130MB 00:04:25.660 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.660 EAL: request: mp_malloc_sync 00:04:25.660 EAL: No shared files mode enabled, IPC is disabled 00:04:25.660 EAL: Heap on socket 0 was shrunk by 130MB 00:04:25.919 EAL: Trying to obtain current memory policy. 00:04:25.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.919 EAL: Restoring previous memory policy: 4 00:04:25.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.919 EAL: request: mp_malloc_sync 00:04:25.919 EAL: No shared files mode enabled, IPC is disabled 00:04:25.919 EAL: Heap on socket 0 was expanded by 258MB 00:04:26.178 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.178 EAL: request: mp_malloc_sync 00:04:26.178 EAL: No shared files mode enabled, IPC is disabled 00:04:26.178 EAL: Heap on socket 0 was shrunk by 258MB 00:04:26.437 EAL: Trying to obtain current memory policy. 
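The rounds keep roughly doubling (each request is a power of two plus 2 MB) up to the 1026 MB round further below. Tallying them from a saved copy of this console output is a one-liner, assuming the output was captured to build.log (hypothetical filename):

    grep -oE 'Heap on socket 0 was (expanded|shrunk) by [0-9]+MB' build.log |
        sort | uniq -c | sort -rn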
00:04:26.437 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.696 EAL: Restoring previous memory policy: 4 00:04:26.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.696 EAL: request: mp_malloc_sync 00:04:26.696 EAL: No shared files mode enabled, IPC is disabled 00:04:26.696 EAL: Heap on socket 0 was expanded by 514MB 00:04:27.264 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.264 EAL: request: mp_malloc_sync 00:04:27.264 EAL: No shared files mode enabled, IPC is disabled 00:04:27.264 EAL: Heap on socket 0 was shrunk by 514MB 00:04:27.832 EAL: Trying to obtain current memory policy. 00:04:27.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.092 EAL: Restoring previous memory policy: 4 00:04:28.092 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.092 EAL: request: mp_malloc_sync 00:04:28.092 EAL: No shared files mode enabled, IPC is disabled 00:04:28.092 EAL: Heap on socket 0 was expanded by 1026MB 00:04:29.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.475 EAL: request: mp_malloc_sync 00:04:29.475 EAL: No shared files mode enabled, IPC is disabled 00:04:29.475 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:30.869 passed 00:04:30.869 00:04:30.869 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.869 suites 1 1 n/a 0 0 00:04:30.869 tests 2 2 2 0 0 00:04:30.869 asserts 5376 5376 5376 0 n/a 00:04:30.869 00:04:30.869 Elapsed time = 5.979 seconds 00:04:30.869 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.869 EAL: request: mp_malloc_sync 00:04:30.869 EAL: No shared files mode enabled, IPC is disabled 00:04:30.869 EAL: Heap on socket 0 was shrunk by 2MB 00:04:30.869 EAL: No shared files mode enabled, IPC is disabled 00:04:30.869 EAL: No shared files mode enabled, IPC is disabled 00:04:30.869 EAL: No shared files mode enabled, IPC is disabled 00:04:30.869 00:04:30.869 real 0m6.288s 00:04:30.869 user 0m5.473s 00:04:30.869 sys 0m0.661s 00:04:30.869 ************************************ 00:04:30.869 END TEST env_vtophys 00:04:30.869 ************************************ 00:04:30.869 09:42:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.869 09:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:30.869 09:42:24 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:30.869 09:42:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.869 09:42:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.869 09:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:30.869 ************************************ 00:04:30.869 START TEST env_pci 00:04:30.869 ************************************ 00:04:30.869 09:42:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.128 00:04:31.128 00:04:31.128 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.128 http://cunit.sourceforge.net/ 00:04:31.128 00:04:31.128 00:04:31.128 Suite: pci 00:04:31.128 Test: pci_hook ...[2024-06-10 09:42:24.640934] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56819 has claimed it 00:04:31.128 passed 00:04:31.128 00:04:31.128 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.128 suites 1 1 n/a 0 0 00:04:31.128 tests 1 1 1 0 0 00:04:31.128 asserts 25 25 25 0 n/a 00:04:31.128 00:04:31.128 Elapsed time = 0.007 seconds 00:04:31.128 EAL: Cannot find device (10000:00:01.0) 00:04:31.128 EAL: Failed to attach device 
on primary process 00:04:31.128 00:04:31.128 real 0m0.075s 00:04:31.128 user 0m0.036s 00:04:31.128 sys 0m0.038s 00:04:31.128 09:42:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.128 ************************************ 00:04:31.128 END TEST env_pci 00:04:31.128 ************************************ 00:04:31.128 09:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:31.128 09:42:24 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.128 09:42:24 -- env/env.sh@15 -- # uname 00:04:31.128 09:42:24 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.128 09:42:24 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:31.128 09:42:24 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.128 09:42:24 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:31.128 09:42:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.128 09:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:31.128 ************************************ 00:04:31.128 START TEST env_dpdk_post_init 00:04:31.128 ************************************ 00:04:31.128 09:42:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.128 EAL: Detected CPU lcores: 10 00:04:31.128 EAL: Detected NUMA nodes: 1 00:04:31.128 EAL: Detected shared linkage of DPDK 00:04:31.128 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.128 EAL: Selected IOVA mode 'PA' 00:04:31.387 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.387 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:31.387 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:31.387 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:04:31.387 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:04:31.387 Starting DPDK initialization... 00:04:31.387 Starting SPDK post initialization... 00:04:31.387 SPDK NVMe probe 00:04:31.387 Attaching to 0000:00:06.0 00:04:31.387 Attaching to 0000:00:07.0 00:04:31.387 Attaching to 0000:00:08.0 00:04:31.387 Attaching to 0000:00:09.0 00:04:31.387 Attached to 0000:00:06.0 00:04:31.387 Attached to 0000:00:07.0 00:04:31.387 Attached to 0000:00:09.0 00:04:31.387 Attached to 0000:00:08.0 00:04:31.387 Cleaning up... 
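All four probed functions are QEMU-emulated NVMe controllers (vendor:device 1b36:0010), and no NUMA locality is reported for them (socket -1) inside the VM. A hedged pre-flight check before handing them to SPDK:

    # list the emulated controllers and their current driver binding
    SPDK=/home/vagrant/spdk_repo/spdk
    lspci -nn -d 1b36:0010           # expect 0000:00:06.0 .. 0000:00:09.0 here
    sudo "$SPDK"/scripts/setup.sh status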
00:04:31.387 00:04:31.387 real 0m0.281s 00:04:31.387 user 0m0.094s 00:04:31.387 sys 0m0.090s 00:04:31.387 09:42:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.387 ************************************ 00:04:31.387 END TEST env_dpdk_post_init 00:04:31.387 ************************************ 00:04:31.387 09:42:25 -- common/autotest_common.sh@10 -- # set +x 00:04:31.387 09:42:25 -- env/env.sh@26 -- # uname 00:04:31.387 09:42:25 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:31.387 09:42:25 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.387 09:42:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.387 09:42:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.387 09:42:25 -- common/autotest_common.sh@10 -- # set +x 00:04:31.387 ************************************ 00:04:31.387 START TEST env_mem_callbacks 00:04:31.387 ************************************ 00:04:31.387 09:42:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.387 EAL: Detected CPU lcores: 10 00:04:31.387 EAL: Detected NUMA nodes: 1 00:04:31.387 EAL: Detected shared linkage of DPDK 00:04:31.387 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.387 EAL: Selected IOVA mode 'PA' 00:04:31.646 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.646 00:04:31.646 00:04:31.646 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.646 http://cunit.sourceforge.net/ 00:04:31.646 00:04:31.646 00:04:31.646 Suite: memory 00:04:31.646 Test: test ... 00:04:31.646 register 0x200000200000 2097152 00:04:31.646 malloc 3145728 00:04:31.646 register 0x200000400000 4194304 00:04:31.646 buf 0x2000004fffc0 len 3145728 PASSED 00:04:31.646 malloc 64 00:04:31.646 buf 0x2000004ffec0 len 64 PASSED 00:04:31.646 malloc 4194304 00:04:31.646 register 0x200000800000 6291456 00:04:31.646 buf 0x2000009fffc0 len 4194304 PASSED 00:04:31.646 free 0x2000004fffc0 3145728 00:04:31.646 free 0x2000004ffec0 64 00:04:31.646 unregister 0x200000400000 4194304 PASSED 00:04:31.646 free 0x2000009fffc0 4194304 00:04:31.646 unregister 0x200000800000 6291456 PASSED 00:04:31.646 malloc 8388608 00:04:31.646 register 0x200000400000 10485760 00:04:31.646 buf 0x2000005fffc0 len 8388608 PASSED 00:04:31.646 free 0x2000005fffc0 8388608 00:04:31.646 unregister 0x200000400000 10485760 PASSED 00:04:31.646 passed 00:04:31.646 00:04:31.646 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.646 suites 1 1 n/a 0 0 00:04:31.646 tests 1 1 1 0 0 00:04:31.646 asserts 15 15 15 0 n/a 00:04:31.646 00:04:31.646 Elapsed time = 0.049 seconds 00:04:31.646 00:04:31.646 real 0m0.249s 00:04:31.646 user 0m0.082s 00:04:31.646 sys 0m0.063s 00:04:31.646 09:42:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.646 09:42:25 -- common/autotest_common.sh@10 -- # set +x 00:04:31.646 ************************************ 00:04:31.646 END TEST env_mem_callbacks 00:04:31.646 ************************************ 00:04:31.646 ************************************ 00:04:31.646 END TEST env 00:04:31.646 ************************************ 00:04:31.646 00:04:31.646 real 0m7.643s 00:04:31.646 user 0m6.165s 00:04:31.646 sys 0m1.092s 00:04:31.646 09:42:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.646 09:42:25 -- common/autotest_common.sh@10 -- # set +x 00:04:31.646 09:42:25 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
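rpc.sh starts a long-lived spdk_tgt with the bdev tracepoint group enabled (-e bdev, visible in the xtrace below) and blocks in waitforlisten until the RPC socket answers. A minimal standalone sketch of that startup, assuming the default /var/tmp/spdk.sock:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/build/bin/spdk_tgt -e bdev & spdk_pid=$!
    # poll until the target's RPC server responds
    until "$SPDK"/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "spdk_tgt up as pid $spdk_pid"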
00:04:31.646 09:42:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.646 09:42:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.646 09:42:25 -- common/autotest_common.sh@10 -- # set +x 00:04:31.906 ************************************ 00:04:31.906 START TEST rpc 00:04:31.906 ************************************ 00:04:31.906 09:42:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:31.906 * Looking for test storage... 00:04:31.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:31.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.906 09:42:25 -- rpc/rpc.sh@65 -- # spdk_pid=56932 00:04:31.906 09:42:25 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.906 09:42:25 -- rpc/rpc.sh@67 -- # waitforlisten 56932 00:04:31.906 09:42:25 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:31.906 09:42:25 -- common/autotest_common.sh@819 -- # '[' -z 56932 ']' 00:04:31.906 09:42:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.906 09:42:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:31.906 09:42:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.906 09:42:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:31.906 09:42:25 -- common/autotest_common.sh@10 -- # set +x 00:04:31.906 [2024-06-10 09:42:25.614425] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:04:31.906 [2024-06-10 09:42:25.614793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56932 ] 00:04:32.164 [2024-06-10 09:42:25.779675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.165 [2024-06-10 09:42:25.921772] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:32.165 [2024-06-10 09:42:25.922228] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:32.165 [2024-06-10 09:42:25.922410] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56932' to capture a snapshot of events at runtime. 00:04:32.165 [2024-06-10 09:42:25.922528] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56932 for offline analysis/debug. 
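The app_setup_trace NOTICEs above print the exact trace commands for this run (the pid is per-invocation). A hedged sketch of both capture paths:

    SPDK=/home/vagrant/spdk_repo/spdk
    # live snapshot from the running target, per the NOTICE
    "$SPDK"/build/bin/spdk_trace -s spdk_tgt -p 56932
    # or post-mortem, from the shared-memory file it names
    "$SPDK"/build/bin/spdk_trace -f /dev/shm/spdk_tgt_trace.pid56932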
00:04:32.165 [2024-06-10 09:42:25.922578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.544 09:42:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:33.544 09:42:27 -- common/autotest_common.sh@852 -- # return 0 00:04:33.544 09:42:27 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.544 09:42:27 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.544 09:42:27 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:33.544 09:42:27 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:33.544 09:42:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.544 09:42:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.544 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:33.544 ************************************ 00:04:33.544 START TEST rpc_integrity 00:04:33.544 ************************************ 00:04:33.544 09:42:27 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:33.544 09:42:27 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.544 09:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.544 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:33.544 09:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.544 09:42:27 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.544 09:42:27 -- rpc/rpc.sh@13 -- # jq length 00:04:33.803 09:42:27 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.803 09:42:27 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.803 09:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.803 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:33.803 09:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.803 09:42:27 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:33.803 09:42:27 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.803 09:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.803 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:33.803 09:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.803 09:42:27 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.803 { 00:04:33.803 "name": "Malloc0", 00:04:33.803 "aliases": [ 00:04:33.803 "9f238b03-b344-46f0-aad5-8078a781ea91" 00:04:33.803 ], 00:04:33.803 "product_name": "Malloc disk", 00:04:33.803 "block_size": 512, 00:04:33.803 "num_blocks": 16384, 00:04:33.803 "uuid": "9f238b03-b344-46f0-aad5-8078a781ea91", 00:04:33.803 "assigned_rate_limits": { 00:04:33.803 "rw_ios_per_sec": 0, 00:04:33.803 "rw_mbytes_per_sec": 0, 00:04:33.803 "r_mbytes_per_sec": 0, 00:04:33.803 "w_mbytes_per_sec": 0 00:04:33.803 }, 00:04:33.803 "claimed": false, 00:04:33.803 "zoned": false, 00:04:33.803 "supported_io_types": { 00:04:33.803 "read": true, 00:04:33.803 "write": true, 00:04:33.803 "unmap": true, 00:04:33.803 "write_zeroes": true, 00:04:33.803 "flush": true, 00:04:33.803 "reset": true, 00:04:33.803 "compare": false, 00:04:33.803 "compare_and_write": false, 00:04:33.803 "abort": true, 00:04:33.803 "nvme_admin": false, 00:04:33.803 "nvme_io": false 00:04:33.803 }, 00:04:33.803 "memory_domains": [ 00:04:33.803 { 00:04:33.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.803 
"dma_device_type": 2 00:04:33.803 } 00:04:33.803 ], 00:04:33.803 "driver_specific": {} 00:04:33.803 } 00:04:33.803 ]' 00:04:33.803 09:42:27 -- rpc/rpc.sh@17 -- # jq length 00:04:33.803 09:42:27 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.803 09:42:27 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:33.803 09:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.803 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:33.803 [2024-06-10 09:42:27.435315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:33.803 [2024-06-10 09:42:27.435403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.803 [2024-06-10 09:42:27.435439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:33.803 [2024-06-10 09:42:27.435458] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.803 [2024-06-10 09:42:27.438581] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.804 [2024-06-10 09:42:27.438645] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.804 Passthru0 00:04:33.804 09:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.804 09:42:27 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.804 09:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.804 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:33.804 09:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.804 09:42:27 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.804 { 00:04:33.804 "name": "Malloc0", 00:04:33.804 "aliases": [ 00:04:33.804 "9f238b03-b344-46f0-aad5-8078a781ea91" 00:04:33.804 ], 00:04:33.804 "product_name": "Malloc disk", 00:04:33.804 "block_size": 512, 00:04:33.804 "num_blocks": 16384, 00:04:33.804 "uuid": "9f238b03-b344-46f0-aad5-8078a781ea91", 00:04:33.804 "assigned_rate_limits": { 00:04:33.804 "rw_ios_per_sec": 0, 00:04:33.804 "rw_mbytes_per_sec": 0, 00:04:33.804 "r_mbytes_per_sec": 0, 00:04:33.804 "w_mbytes_per_sec": 0 00:04:33.804 }, 00:04:33.804 "claimed": true, 00:04:33.804 "claim_type": "exclusive_write", 00:04:33.804 "zoned": false, 00:04:33.804 "supported_io_types": { 00:04:33.804 "read": true, 00:04:33.804 "write": true, 00:04:33.804 "unmap": true, 00:04:33.804 "write_zeroes": true, 00:04:33.804 "flush": true, 00:04:33.804 "reset": true, 00:04:33.804 "compare": false, 00:04:33.804 "compare_and_write": false, 00:04:33.804 "abort": true, 00:04:33.804 "nvme_admin": false, 00:04:33.804 "nvme_io": false 00:04:33.804 }, 00:04:33.804 "memory_domains": [ 00:04:33.804 { 00:04:33.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.804 "dma_device_type": 2 00:04:33.804 } 00:04:33.804 ], 00:04:33.804 "driver_specific": {} 00:04:33.804 }, 00:04:33.804 { 00:04:33.804 "name": "Passthru0", 00:04:33.804 "aliases": [ 00:04:33.804 "1889c5ba-2ef5-5c52-a303-11a614a7c21d" 00:04:33.804 ], 00:04:33.804 "product_name": "passthru", 00:04:33.804 "block_size": 512, 00:04:33.804 "num_blocks": 16384, 00:04:33.804 "uuid": "1889c5ba-2ef5-5c52-a303-11a614a7c21d", 00:04:33.804 "assigned_rate_limits": { 00:04:33.804 "rw_ios_per_sec": 0, 00:04:33.804 "rw_mbytes_per_sec": 0, 00:04:33.804 "r_mbytes_per_sec": 0, 00:04:33.804 "w_mbytes_per_sec": 0 00:04:33.804 }, 00:04:33.804 "claimed": false, 00:04:33.804 "zoned": false, 00:04:33.804 "supported_io_types": { 00:04:33.804 "read": true, 00:04:33.804 "write": true, 00:04:33.804 "unmap": true, 00:04:33.804 
"write_zeroes": true, 00:04:33.804 "flush": true, 00:04:33.804 "reset": true, 00:04:33.804 "compare": false, 00:04:33.804 "compare_and_write": false, 00:04:33.804 "abort": true, 00:04:33.804 "nvme_admin": false, 00:04:33.804 "nvme_io": false 00:04:33.804 }, 00:04:33.804 "memory_domains": [ 00:04:33.804 { 00:04:33.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.804 "dma_device_type": 2 00:04:33.804 } 00:04:33.804 ], 00:04:33.804 "driver_specific": { 00:04:33.804 "passthru": { 00:04:33.804 "name": "Passthru0", 00:04:33.804 "base_bdev_name": "Malloc0" 00:04:33.804 } 00:04:33.804 } 00:04:33.804 } 00:04:33.804 ]' 00:04:33.804 09:42:27 -- rpc/rpc.sh@21 -- # jq length 00:04:33.804 09:42:27 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.804 09:42:27 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.804 09:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.804 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:33.804 09:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.804 09:42:27 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:33.804 09:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.804 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:33.804 09:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.804 09:42:27 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.804 09:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.804 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:33.804 09:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.804 09:42:27 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.804 09:42:27 -- rpc/rpc.sh@26 -- # jq length 00:04:34.063 ************************************ 00:04:34.063 END TEST rpc_integrity 00:04:34.063 ************************************ 00:04:34.063 09:42:27 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.063 00:04:34.063 real 0m0.347s 00:04:34.063 user 0m0.218s 00:04:34.063 sys 0m0.034s 00:04:34.063 09:42:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.063 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:34.063 09:42:27 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:34.063 09:42:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.063 09:42:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.063 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:34.063 ************************************ 00:04:34.063 START TEST rpc_plugins 00:04:34.063 ************************************ 00:04:34.063 09:42:27 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:34.063 09:42:27 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:34.063 09:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.063 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:34.063 09:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.063 09:42:27 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:34.063 09:42:27 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:34.063 09:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.063 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:34.063 09:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.063 09:42:27 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:34.063 { 00:04:34.063 "name": "Malloc1", 00:04:34.063 "aliases": [ 00:04:34.063 "b82c794f-69b8-4a16-a1e1-d6759fff9806" 00:04:34.063 ], 00:04:34.063 "product_name": "Malloc disk", 00:04:34.063 
"block_size": 4096, 00:04:34.063 "num_blocks": 256, 00:04:34.063 "uuid": "b82c794f-69b8-4a16-a1e1-d6759fff9806", 00:04:34.063 "assigned_rate_limits": { 00:04:34.063 "rw_ios_per_sec": 0, 00:04:34.063 "rw_mbytes_per_sec": 0, 00:04:34.063 "r_mbytes_per_sec": 0, 00:04:34.063 "w_mbytes_per_sec": 0 00:04:34.063 }, 00:04:34.063 "claimed": false, 00:04:34.063 "zoned": false, 00:04:34.063 "supported_io_types": { 00:04:34.063 "read": true, 00:04:34.063 "write": true, 00:04:34.063 "unmap": true, 00:04:34.063 "write_zeroes": true, 00:04:34.063 "flush": true, 00:04:34.063 "reset": true, 00:04:34.063 "compare": false, 00:04:34.063 "compare_and_write": false, 00:04:34.063 "abort": true, 00:04:34.063 "nvme_admin": false, 00:04:34.063 "nvme_io": false 00:04:34.063 }, 00:04:34.063 "memory_domains": [ 00:04:34.063 { 00:04:34.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.063 "dma_device_type": 2 00:04:34.063 } 00:04:34.063 ], 00:04:34.063 "driver_specific": {} 00:04:34.063 } 00:04:34.063 ]' 00:04:34.063 09:42:27 -- rpc/rpc.sh@32 -- # jq length 00:04:34.063 09:42:27 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:34.063 09:42:27 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:34.063 09:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.063 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:34.063 09:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.063 09:42:27 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:34.063 09:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.063 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:34.063 09:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.063 09:42:27 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:34.063 09:42:27 -- rpc/rpc.sh@36 -- # jq length 00:04:34.322 ************************************ 00:04:34.322 END TEST rpc_plugins 00:04:34.322 ************************************ 00:04:34.322 09:42:27 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:34.322 00:04:34.322 real 0m0.163s 00:04:34.322 user 0m0.105s 00:04:34.322 sys 0m0.018s 00:04:34.322 09:42:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.322 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:34.322 09:42:27 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:34.322 09:42:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.322 09:42:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.322 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:34.322 ************************************ 00:04:34.322 START TEST rpc_trace_cmd_test 00:04:34.322 ************************************ 00:04:34.322 09:42:27 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:34.322 09:42:27 -- rpc/rpc.sh@40 -- # local info 00:04:34.322 09:42:27 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:34.322 09:42:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.322 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:34.322 09:42:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.322 09:42:27 -- rpc/rpc.sh@42 -- # info='{ 00:04:34.322 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56932", 00:04:34.322 "tpoint_group_mask": "0x8", 00:04:34.322 "iscsi_conn": { 00:04:34.322 "mask": "0x2", 00:04:34.322 "tpoint_mask": "0x0" 00:04:34.322 }, 00:04:34.322 "scsi": { 00:04:34.322 "mask": "0x4", 00:04:34.322 "tpoint_mask": "0x0" 00:04:34.322 }, 00:04:34.322 "bdev": { 00:04:34.322 "mask": "0x8", 00:04:34.322 "tpoint_mask": 
"0xffffffffffffffff" 00:04:34.322 }, 00:04:34.322 "nvmf_rdma": { 00:04:34.322 "mask": "0x10", 00:04:34.322 "tpoint_mask": "0x0" 00:04:34.322 }, 00:04:34.322 "nvmf_tcp": { 00:04:34.322 "mask": "0x20", 00:04:34.322 "tpoint_mask": "0x0" 00:04:34.322 }, 00:04:34.322 "ftl": { 00:04:34.322 "mask": "0x40", 00:04:34.322 "tpoint_mask": "0x0" 00:04:34.322 }, 00:04:34.322 "blobfs": { 00:04:34.322 "mask": "0x80", 00:04:34.322 "tpoint_mask": "0x0" 00:04:34.322 }, 00:04:34.322 "dsa": { 00:04:34.322 "mask": "0x200", 00:04:34.322 "tpoint_mask": "0x0" 00:04:34.322 }, 00:04:34.322 "thread": { 00:04:34.322 "mask": "0x400", 00:04:34.322 "tpoint_mask": "0x0" 00:04:34.322 }, 00:04:34.322 "nvme_pcie": { 00:04:34.322 "mask": "0x800", 00:04:34.322 "tpoint_mask": "0x0" 00:04:34.322 }, 00:04:34.322 "iaa": { 00:04:34.322 "mask": "0x1000", 00:04:34.322 "tpoint_mask": "0x0" 00:04:34.322 }, 00:04:34.322 "nvme_tcp": { 00:04:34.322 "mask": "0x2000", 00:04:34.322 "tpoint_mask": "0x0" 00:04:34.322 }, 00:04:34.322 "bdev_nvme": { 00:04:34.322 "mask": "0x4000", 00:04:34.322 "tpoint_mask": "0x0" 00:04:34.322 } 00:04:34.322 }' 00:04:34.322 09:42:27 -- rpc/rpc.sh@43 -- # jq length 00:04:34.322 09:42:27 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:34.322 09:42:27 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:34.322 09:42:28 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:34.322 09:42:28 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:34.581 09:42:28 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:34.581 09:42:28 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:34.581 09:42:28 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:34.581 09:42:28 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:34.582 ************************************ 00:04:34.582 END TEST rpc_trace_cmd_test 00:04:34.582 ************************************ 00:04:34.582 09:42:28 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:34.582 00:04:34.582 real 0m0.322s 00:04:34.582 user 0m0.285s 00:04:34.582 sys 0m0.025s 00:04:34.582 09:42:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.582 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:34.582 09:42:28 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:34.582 09:42:28 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:34.582 09:42:28 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:34.582 09:42:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.582 09:42:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.582 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:34.582 ************************************ 00:04:34.582 START TEST rpc_daemon_integrity 00:04:34.582 ************************************ 00:04:34.582 09:42:28 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:34.582 09:42:28 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.582 09:42:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.582 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:34.582 09:42:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.582 09:42:28 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.582 09:42:28 -- rpc/rpc.sh@13 -- # jq length 00:04:34.582 09:42:28 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.582 09:42:28 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.582 09:42:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.582 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:34.582 09:42:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.582 09:42:28 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:34.582 09:42:28 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.582 09:42:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.582 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:34.841 09:42:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.841 09:42:28 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.841 { 00:04:34.841 "name": "Malloc2", 00:04:34.841 "aliases": [ 00:04:34.841 "deb46dee-2d7b-43d1-808b-a2703b4c1e50" 00:04:34.841 ], 00:04:34.841 "product_name": "Malloc disk", 00:04:34.841 "block_size": 512, 00:04:34.841 "num_blocks": 16384, 00:04:34.841 "uuid": "deb46dee-2d7b-43d1-808b-a2703b4c1e50", 00:04:34.841 "assigned_rate_limits": { 00:04:34.841 "rw_ios_per_sec": 0, 00:04:34.841 "rw_mbytes_per_sec": 0, 00:04:34.841 "r_mbytes_per_sec": 0, 00:04:34.841 "w_mbytes_per_sec": 0 00:04:34.841 }, 00:04:34.841 "claimed": false, 00:04:34.841 "zoned": false, 00:04:34.841 "supported_io_types": { 00:04:34.841 "read": true, 00:04:34.841 "write": true, 00:04:34.841 "unmap": true, 00:04:34.841 "write_zeroes": true, 00:04:34.841 "flush": true, 00:04:34.841 "reset": true, 00:04:34.841 "compare": false, 00:04:34.841 "compare_and_write": false, 00:04:34.841 "abort": true, 00:04:34.841 "nvme_admin": false, 00:04:34.841 "nvme_io": false 00:04:34.841 }, 00:04:34.841 "memory_domains": [ 00:04:34.841 { 00:04:34.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.841 "dma_device_type": 2 00:04:34.841 } 00:04:34.841 ], 00:04:34.841 "driver_specific": {} 00:04:34.841 } 00:04:34.841 ]' 00:04:34.841 09:42:28 -- rpc/rpc.sh@17 -- # jq length 00:04:34.841 09:42:28 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.841 09:42:28 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:34.841 09:42:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.841 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:34.841 [2024-06-10 09:42:28.415647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:34.841 [2024-06-10 09:42:28.415780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.841 [2024-06-10 09:42:28.415813] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:04:34.841 [2024-06-10 09:42:28.415830] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.841 [2024-06-10 09:42:28.418776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.841 [2024-06-10 09:42:28.418839] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.841 Passthru0 00:04:34.841 09:42:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.841 09:42:28 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.841 09:42:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.841 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:34.842 09:42:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.842 09:42:28 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.842 { 00:04:34.842 "name": "Malloc2", 00:04:34.842 "aliases": [ 00:04:34.842 "deb46dee-2d7b-43d1-808b-a2703b4c1e50" 00:04:34.842 ], 00:04:34.842 "product_name": "Malloc disk", 00:04:34.842 "block_size": 512, 00:04:34.842 "num_blocks": 16384, 00:04:34.842 "uuid": "deb46dee-2d7b-43d1-808b-a2703b4c1e50", 00:04:34.842 "assigned_rate_limits": { 00:04:34.842 "rw_ios_per_sec": 0, 00:04:34.842 "rw_mbytes_per_sec": 0, 00:04:34.842 "r_mbytes_per_sec": 0, 00:04:34.842 
"w_mbytes_per_sec": 0 00:04:34.842 }, 00:04:34.842 "claimed": true, 00:04:34.842 "claim_type": "exclusive_write", 00:04:34.842 "zoned": false, 00:04:34.842 "supported_io_types": { 00:04:34.842 "read": true, 00:04:34.842 "write": true, 00:04:34.842 "unmap": true, 00:04:34.842 "write_zeroes": true, 00:04:34.842 "flush": true, 00:04:34.842 "reset": true, 00:04:34.842 "compare": false, 00:04:34.842 "compare_and_write": false, 00:04:34.842 "abort": true, 00:04:34.842 "nvme_admin": false, 00:04:34.842 "nvme_io": false 00:04:34.842 }, 00:04:34.842 "memory_domains": [ 00:04:34.842 { 00:04:34.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.842 "dma_device_type": 2 00:04:34.842 } 00:04:34.842 ], 00:04:34.842 "driver_specific": {} 00:04:34.842 }, 00:04:34.842 { 00:04:34.842 "name": "Passthru0", 00:04:34.842 "aliases": [ 00:04:34.842 "9be6d78e-5ca5-505d-83c5-741445ea3ef1" 00:04:34.842 ], 00:04:34.842 "product_name": "passthru", 00:04:34.842 "block_size": 512, 00:04:34.842 "num_blocks": 16384, 00:04:34.842 "uuid": "9be6d78e-5ca5-505d-83c5-741445ea3ef1", 00:04:34.842 "assigned_rate_limits": { 00:04:34.842 "rw_ios_per_sec": 0, 00:04:34.842 "rw_mbytes_per_sec": 0, 00:04:34.842 "r_mbytes_per_sec": 0, 00:04:34.842 "w_mbytes_per_sec": 0 00:04:34.842 }, 00:04:34.842 "claimed": false, 00:04:34.842 "zoned": false, 00:04:34.842 "supported_io_types": { 00:04:34.842 "read": true, 00:04:34.842 "write": true, 00:04:34.842 "unmap": true, 00:04:34.842 "write_zeroes": true, 00:04:34.842 "flush": true, 00:04:34.842 "reset": true, 00:04:34.842 "compare": false, 00:04:34.842 "compare_and_write": false, 00:04:34.842 "abort": true, 00:04:34.842 "nvme_admin": false, 00:04:34.842 "nvme_io": false 00:04:34.842 }, 00:04:34.842 "memory_domains": [ 00:04:34.842 { 00:04:34.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.842 "dma_device_type": 2 00:04:34.842 } 00:04:34.842 ], 00:04:34.842 "driver_specific": { 00:04:34.842 "passthru": { 00:04:34.842 "name": "Passthru0", 00:04:34.842 "base_bdev_name": "Malloc2" 00:04:34.842 } 00:04:34.842 } 00:04:34.842 } 00:04:34.842 ]' 00:04:34.842 09:42:28 -- rpc/rpc.sh@21 -- # jq length 00:04:34.842 09:42:28 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.842 09:42:28 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.842 09:42:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.842 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:34.842 09:42:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.842 09:42:28 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:34.842 09:42:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.842 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:34.842 09:42:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.842 09:42:28 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.842 09:42:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.842 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:34.842 09:42:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.842 09:42:28 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.842 09:42:28 -- rpc/rpc.sh@26 -- # jq length 00:04:35.101 ************************************ 00:04:35.101 END TEST rpc_daemon_integrity 00:04:35.101 ************************************ 00:04:35.101 09:42:28 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.101 00:04:35.101 real 0m0.401s 00:04:35.101 user 0m0.268s 00:04:35.101 sys 0m0.042s 00:04:35.101 09:42:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.101 
09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:35.101 09:42:28 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:35.101 09:42:28 -- rpc/rpc.sh@84 -- # killprocess 56932 00:04:35.101 09:42:28 -- common/autotest_common.sh@926 -- # '[' -z 56932 ']' 00:04:35.101 09:42:28 -- common/autotest_common.sh@930 -- # kill -0 56932 00:04:35.101 09:42:28 -- common/autotest_common.sh@931 -- # uname 00:04:35.101 09:42:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:35.101 09:42:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56932 00:04:35.101 killing process with pid 56932 00:04:35.101 09:42:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:35.101 09:42:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:35.101 09:42:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56932' 00:04:35.101 09:42:28 -- common/autotest_common.sh@945 -- # kill 56932 00:04:35.101 09:42:28 -- common/autotest_common.sh@950 -- # wait 56932 00:04:37.007 00:04:37.007 real 0m5.125s 00:04:37.007 user 0m6.170s 00:04:37.007 sys 0m0.768s 00:04:37.007 09:42:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.007 ************************************ 00:04:37.007 END TEST rpc 00:04:37.007 ************************************ 00:04:37.007 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:04:37.007 09:42:30 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:37.007 09:42:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.007 09:42:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.007 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:04:37.007 ************************************ 00:04:37.007 START TEST rpc_client 00:04:37.007 ************************************ 00:04:37.007 09:42:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:37.007 * Looking for test storage... 
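The rpc_integrity and rpc_daemon_integrity runs above exercise the same five-step sequence through rpc_cmd; a hedged rpc.py equivalent against the same target, with values mirroring the JSON dumps above:

    SPDK=/home/vagrant/spdk_repo/spdk
    rpc="$SPDK/scripts/rpc.py"
    $rpc bdev_malloc_create 8 512                      # 8 MiB => 16384 x 512 B blocks
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # claims Malloc0 (exclusive_write)
    $rpc bdev_get_bdevs | jq length                    # 2, as the '[' 2 == 2 ']' check asserts
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0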
00:04:37.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:37.007 09:42:30 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:37.007 OK 00:04:37.007 09:42:30 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:37.007 00:04:37.007 real 0m0.145s 00:04:37.007 user 0m0.076s 00:04:37.007 sys 0m0.075s 00:04:37.007 09:42:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.007 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:04:37.007 ************************************ 00:04:37.007 END TEST rpc_client 00:04:37.007 ************************************ 00:04:37.266 09:42:30 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:37.266 09:42:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.266 09:42:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.266 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:04:37.266 ************************************ 00:04:37.266 START TEST json_config 00:04:37.266 ************************************ 00:04:37.266 09:42:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:37.266 09:42:30 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:37.266 09:42:30 -- nvmf/common.sh@7 -- # uname -s 00:04:37.266 09:42:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.266 09:42:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.267 09:42:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.267 09:42:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.267 09:42:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.267 09:42:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.267 09:42:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.267 09:42:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.267 09:42:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.267 09:42:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.267 09:42:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ba251500-b233-4587-8b38-2bc1a120701d 00:04:37.267 09:42:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=ba251500-b233-4587-8b38-2bc1a120701d 00:04:37.267 09:42:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.267 09:42:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.267 09:42:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.267 09:42:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:37.267 09:42:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.267 09:42:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.267 09:42:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.267 09:42:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.267 09:42:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.267 09:42:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.267 09:42:30 -- paths/export.sh@5 -- # export PATH 00:04:37.267 09:42:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.267 09:42:30 -- nvmf/common.sh@46 -- # : 0 00:04:37.267 09:42:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:37.267 09:42:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:37.267 09:42:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:37.267 09:42:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.267 09:42:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.267 09:42:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:37.267 09:42:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:37.267 09:42:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:37.267 09:42:30 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:37.267 09:42:30 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:37.267 WARNING: No tests are enabled so not running JSON configuration tests 00:04:37.267 09:42:30 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:37.267 09:42:30 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:37.267 09:42:30 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:37.267 09:42:30 -- json_config/json_config.sh@27 -- # exit 0 00:04:37.267 00:04:37.267 real 0m0.080s 00:04:37.267 user 0m0.039s 00:04:37.267 sys 0m0.039s 00:04:37.267 ************************************ 00:04:37.267 END TEST json_config 00:04:37.267 ************************************ 00:04:37.267 09:42:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.267 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:04:37.267 09:42:30 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:37.267 09:42:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.267 09:42:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.267 09:42:30 -- common/autotest_common.sh@10 -- # set +x 00:04:37.267 ************************************ 00:04:37.267 START TEST json_config_extra_key 00:04:37.267 
************************************ 00:04:37.267 09:42:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:37.267 09:42:30 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:37.267 09:42:30 -- nvmf/common.sh@7 -- # uname -s 00:04:37.267 09:42:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.267 09:42:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.267 09:42:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.267 09:42:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.267 09:42:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.267 09:42:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.267 09:42:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.267 09:42:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.267 09:42:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.267 09:42:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.267 09:42:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ba251500-b233-4587-8b38-2bc1a120701d 00:04:37.267 09:42:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=ba251500-b233-4587-8b38-2bc1a120701d 00:04:37.267 09:42:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.267 09:42:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.267 09:42:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.267 09:42:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:37.267 09:42:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.267 09:42:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.267 09:42:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.267 09:42:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.267 09:42:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.267 09:42:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.267 09:42:30 -- paths/export.sh@5 -- # export PATH 00:04:37.267 09:42:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.267 09:42:30 -- nvmf/common.sh@46 -- # : 0 00:04:37.267 09:42:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:37.267 09:42:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:37.267 09:42:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:37.267 09:42:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.267 09:42:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.267 09:42:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:37.267 09:42:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:37.267 09:42:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:37.267 INFO: launching applications... 00:04:37.267 Waiting for target to run... 00:04:37.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=57231 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
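json_config_test_start_app launches the target with the flags recorded below (-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json). A hedged sketch of the same flow with a stand-in config; the bdev method and params here are illustrative, not the contents of the repo's actual extra_key.json:

    SPDK=/home/vagrant/spdk_repo/spdk
    cat > /tmp/extra_key.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "MallocForKey0", "num_blocks": 16384, "block_size": 512 } }
          ]
        }
      ]
    }
    EOF
    "$SPDK"/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key.json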
00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 57231 /var/tmp/spdk_tgt.sock 00:04:37.267 09:42:31 -- common/autotest_common.sh@819 -- # '[' -z 57231 ']' 00:04:37.267 09:42:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.267 09:42:31 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:37.267 09:42:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:37.267 09:42:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.268 09:42:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:37.268 09:42:31 -- common/autotest_common.sh@10 -- # set +x 00:04:37.526 [2024-06-10 09:42:31.123435] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:04:37.526 [2024-06-10 09:42:31.123613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57231 ] 00:04:37.784 [2024-06-10 09:42:31.464049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.043 [2024-06-10 09:42:31.649714] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:38.043 [2024-06-10 09:42:31.649952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.419 00:04:39.419 INFO: shutting down applications... 00:04:39.419 09:42:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:39.419 09:42:32 -- common/autotest_common.sh@852 -- # return 0 00:04:39.419 09:42:32 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:39.419 09:42:32 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
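The shutdown that follows is the startup wait run in reverse: where waitforlisten above polled for the RPC socket to appear, json_config_test_shutdown_app sends SIGINT and then probes the PID with kill -0 every half second, for up to 30 tries. A standalone sketch of that loop, assuming a generic daemon PID; stop_and_wait is an illustrative name, not an SPDK helper:

    # Send SIGINT, then poll with `kill -0` until the process exits,
    # mirroring the shutdown loop traced below.
    stop_and_wait() {
        local pid=$1 retries=${2:-30} i
        kill -SIGINT "$pid" 2>/dev/null || return 0    # already gone
        for ((i = 0; i < retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 0     # exited
            sleep 0.5
        done
        echo "process $pid still alive after $retries tries" >&2
        return 1
    }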
00:04:39.419 09:42:32 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:39.419 09:42:32 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:39.419 09:42:32 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:39.419 09:42:32 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 57231 ]] 00:04:39.419 09:42:32 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 57231 00:04:39.419 09:42:32 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:39.419 09:42:32 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:39.419 09:42:32 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57231 00:04:39.419 09:42:32 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:39.677 09:42:33 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:39.677 09:42:33 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:39.677 09:42:33 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57231 00:04:39.677 09:42:33 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:40.244 09:42:33 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:40.244 09:42:33 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:40.244 09:42:33 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57231 00:04:40.244 09:42:33 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:40.810 09:42:34 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:40.810 09:42:34 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:40.810 09:42:34 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57231 00:04:40.810 09:42:34 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:41.068 09:42:34 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:41.069 09:42:34 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:41.069 09:42:34 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57231 00:04:41.069 09:42:34 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:41.636 09:42:35 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:41.636 09:42:35 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:41.636 09:42:35 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57231 00:04:41.636 SPDK target shutdown done 00:04:41.636 Success 00:04:41.636 09:42:35 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:41.636 09:42:35 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:41.636 09:42:35 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:41.636 09:42:35 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:41.636 09:42:35 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:41.636 00:04:41.636 real 0m4.353s 00:04:41.636 user 0m4.062s 00:04:41.636 sys 0m0.476s 00:04:41.636 ************************************ 00:04:41.636 END TEST json_config_extra_key 00:04:41.636 ************************************ 00:04:41.636 09:42:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.636 09:42:35 -- common/autotest_common.sh@10 -- # set +x 00:04:41.636 09:42:35 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:41.636 09:42:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.636 09:42:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.636 09:42:35 -- common/autotest_common.sh@10 -- # 
set +x 00:04:41.636 ************************************ 00:04:41.636 START TEST alias_rpc 00:04:41.636 ************************************ 00:04:41.636 09:42:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:41.895 * Looking for test storage... 00:04:41.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:41.895 09:42:35 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:41.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.895 09:42:35 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57335 00:04:41.895 09:42:35 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.895 09:42:35 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57335 00:04:41.895 09:42:35 -- common/autotest_common.sh@819 -- # '[' -z 57335 ']' 00:04:41.895 09:42:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.895 09:42:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:41.895 09:42:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.895 09:42:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:41.895 09:42:35 -- common/autotest_common.sh@10 -- # set +x 00:04:41.895 [2024-06-10 09:42:35.530761] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:04:41.895 [2024-06-10 09:42:35.531689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57335 ] 00:04:42.154 [2024-06-10 09:42:35.704968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.154 [2024-06-10 09:42:35.870045] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:42.154 [2024-06-10 09:42:35.870560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.532 09:42:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:43.532 09:42:37 -- common/autotest_common.sh@852 -- # return 0 00:04:43.532 09:42:37 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:43.791 09:42:37 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57335 00:04:43.791 09:42:37 -- common/autotest_common.sh@926 -- # '[' -z 57335 ']' 00:04:43.791 09:42:37 -- common/autotest_common.sh@930 -- # kill -0 57335 00:04:43.791 09:42:37 -- common/autotest_common.sh@931 -- # uname 00:04:43.791 09:42:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:43.791 09:42:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57335 00:04:43.791 killing process with pid 57335 00:04:43.791 09:42:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:43.791 09:42:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:43.791 09:42:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57335' 00:04:43.791 09:42:37 -- common/autotest_common.sh@945 -- # kill 57335 00:04:43.791 09:42:37 -- common/autotest_common.sh@950 -- # wait 57335 00:04:45.714 ************************************ 00:04:45.714 END TEST alias_rpc 00:04:45.714 ************************************ 00:04:45.714 00:04:45.714 real 0m3.795s 00:04:45.714 user 0m4.138s 
00:04:45.714 sys 0m0.465s 00:04:45.714 09:42:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.714 09:42:39 -- common/autotest_common.sh@10 -- # set +x 00:04:45.714 09:42:39 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:45.714 09:42:39 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:45.714 09:42:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:45.714 09:42:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.714 09:42:39 -- common/autotest_common.sh@10 -- # set +x 00:04:45.714 ************************************ 00:04:45.714 START TEST spdkcli_tcp 00:04:45.714 ************************************ 00:04:45.714 09:42:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:45.714 * Looking for test storage... 00:04:45.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:45.714 09:42:39 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:45.714 09:42:39 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:45.714 09:42:39 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:45.714 09:42:39 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:45.714 09:42:39 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:45.714 09:42:39 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:45.714 09:42:39 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:45.714 09:42:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:45.714 09:42:39 -- common/autotest_common.sh@10 -- # set +x 00:04:45.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.714 09:42:39 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57429 00:04:45.714 09:42:39 -- spdkcli/tcp.sh@27 -- # waitforlisten 57429 00:04:45.714 09:42:39 -- common/autotest_common.sh@819 -- # '[' -z 57429 ']' 00:04:45.714 09:42:39 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:45.714 09:42:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.714 09:42:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:45.714 09:42:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.714 09:42:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:45.714 09:42:39 -- common/autotest_common.sh@10 -- # set +x 00:04:45.714 [2024-06-10 09:42:39.383721] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
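The alias_rpc run that finished above drives a single call, rpc.py load_config -i, which replays a saved JSON configuration against the running target over the default /var/tmp/spdk.sock socket; given the test's name, the -i flag appears to enable the deprecated RPC method aliases being exercised. A hedged sketch of the same call; the config path is illustrative, and feeding it on stdin is an assumption, since redirections never show up in an xtrace:

    # Replay a JSON config against a running spdk_tgt, aliases included.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < /tmp/conf.json   # path is illustrative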
00:04:45.714 [2024-06-10 09:42:39.383909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57429 ] 00:04:45.973 [2024-06-10 09:42:39.555502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.973 [2024-06-10 09:42:39.715633] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:45.973 [2024-06-10 09:42:39.716147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.973 [2024-06-10 09:42:39.716177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.348 09:42:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:47.348 09:42:41 -- common/autotest_common.sh@852 -- # return 0 00:04:47.348 09:42:41 -- spdkcli/tcp.sh@31 -- # socat_pid=57454 00:04:47.348 09:42:41 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:47.348 09:42:41 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:47.607 [ 00:04:47.607 "bdev_malloc_delete", 00:04:47.607 "bdev_malloc_create", 00:04:47.607 "bdev_null_resize", 00:04:47.607 "bdev_null_delete", 00:04:47.607 "bdev_null_create", 00:04:47.607 "bdev_nvme_cuse_unregister", 00:04:47.607 "bdev_nvme_cuse_register", 00:04:47.607 "bdev_opal_new_user", 00:04:47.607 "bdev_opal_set_lock_state", 00:04:47.607 "bdev_opal_delete", 00:04:47.607 "bdev_opal_get_info", 00:04:47.607 "bdev_opal_create", 00:04:47.607 "bdev_nvme_opal_revert", 00:04:47.607 "bdev_nvme_opal_init", 00:04:47.607 "bdev_nvme_send_cmd", 00:04:47.607 "bdev_nvme_get_path_iostat", 00:04:47.607 "bdev_nvme_get_mdns_discovery_info", 00:04:47.607 "bdev_nvme_stop_mdns_discovery", 00:04:47.607 "bdev_nvme_start_mdns_discovery", 00:04:47.607 "bdev_nvme_set_multipath_policy", 00:04:47.607 "bdev_nvme_set_preferred_path", 00:04:47.607 "bdev_nvme_get_io_paths", 00:04:47.607 "bdev_nvme_remove_error_injection", 00:04:47.607 "bdev_nvme_add_error_injection", 00:04:47.607 "bdev_nvme_get_discovery_info", 00:04:47.607 "bdev_nvme_stop_discovery", 00:04:47.607 "bdev_nvme_start_discovery", 00:04:47.607 "bdev_nvme_get_controller_health_info", 00:04:47.607 "bdev_nvme_disable_controller", 00:04:47.607 "bdev_nvme_enable_controller", 00:04:47.607 "bdev_nvme_reset_controller", 00:04:47.607 "bdev_nvme_get_transport_statistics", 00:04:47.607 "bdev_nvme_apply_firmware", 00:04:47.607 "bdev_nvme_detach_controller", 00:04:47.607 "bdev_nvme_get_controllers", 00:04:47.607 "bdev_nvme_attach_controller", 00:04:47.607 "bdev_nvme_set_hotplug", 00:04:47.607 "bdev_nvme_set_options", 00:04:47.607 "bdev_passthru_delete", 00:04:47.607 "bdev_passthru_create", 00:04:47.607 "bdev_lvol_grow_lvstore", 00:04:47.607 "bdev_lvol_get_lvols", 00:04:47.607 "bdev_lvol_get_lvstores", 00:04:47.607 "bdev_lvol_delete", 00:04:47.607 "bdev_lvol_set_read_only", 00:04:47.607 "bdev_lvol_resize", 00:04:47.607 "bdev_lvol_decouple_parent", 00:04:47.607 "bdev_lvol_inflate", 00:04:47.607 "bdev_lvol_rename", 00:04:47.607 "bdev_lvol_clone_bdev", 00:04:47.607 "bdev_lvol_clone", 00:04:47.607 "bdev_lvol_snapshot", 00:04:47.607 "bdev_lvol_create", 00:04:47.607 "bdev_lvol_delete_lvstore", 00:04:47.607 "bdev_lvol_rename_lvstore", 00:04:47.607 "bdev_lvol_create_lvstore", 00:04:47.607 "bdev_raid_set_options", 00:04:47.607 "bdev_raid_remove_base_bdev", 00:04:47.607 "bdev_raid_add_base_bdev", 
00:04:47.607 "bdev_raid_delete", 00:04:47.607 "bdev_raid_create", 00:04:47.607 "bdev_raid_get_bdevs", 00:04:47.607 "bdev_error_inject_error", 00:04:47.607 "bdev_error_delete", 00:04:47.607 "bdev_error_create", 00:04:47.607 "bdev_split_delete", 00:04:47.607 "bdev_split_create", 00:04:47.607 "bdev_delay_delete", 00:04:47.607 "bdev_delay_create", 00:04:47.607 "bdev_delay_update_latency", 00:04:47.607 "bdev_zone_block_delete", 00:04:47.607 "bdev_zone_block_create", 00:04:47.607 "blobfs_create", 00:04:47.607 "blobfs_detect", 00:04:47.607 "blobfs_set_cache_size", 00:04:47.607 "bdev_xnvme_delete", 00:04:47.607 "bdev_xnvme_create", 00:04:47.607 "bdev_aio_delete", 00:04:47.607 "bdev_aio_rescan", 00:04:47.607 "bdev_aio_create", 00:04:47.607 "bdev_ftl_set_property", 00:04:47.607 "bdev_ftl_get_properties", 00:04:47.607 "bdev_ftl_get_stats", 00:04:47.607 "bdev_ftl_unmap", 00:04:47.607 "bdev_ftl_unload", 00:04:47.607 "bdev_ftl_delete", 00:04:47.607 "bdev_ftl_load", 00:04:47.607 "bdev_ftl_create", 00:04:47.607 "bdev_virtio_attach_controller", 00:04:47.607 "bdev_virtio_scsi_get_devices", 00:04:47.607 "bdev_virtio_detach_controller", 00:04:47.607 "bdev_virtio_blk_set_hotplug", 00:04:47.607 "bdev_iscsi_delete", 00:04:47.607 "bdev_iscsi_create", 00:04:47.607 "bdev_iscsi_set_options", 00:04:47.607 "accel_error_inject_error", 00:04:47.607 "ioat_scan_accel_module", 00:04:47.607 "dsa_scan_accel_module", 00:04:47.607 "iaa_scan_accel_module", 00:04:47.607 "iscsi_set_options", 00:04:47.607 "iscsi_get_auth_groups", 00:04:47.607 "iscsi_auth_group_remove_secret", 00:04:47.607 "iscsi_auth_group_add_secret", 00:04:47.607 "iscsi_delete_auth_group", 00:04:47.607 "iscsi_create_auth_group", 00:04:47.608 "iscsi_set_discovery_auth", 00:04:47.608 "iscsi_get_options", 00:04:47.608 "iscsi_target_node_request_logout", 00:04:47.608 "iscsi_target_node_set_redirect", 00:04:47.608 "iscsi_target_node_set_auth", 00:04:47.608 "iscsi_target_node_add_lun", 00:04:47.608 "iscsi_get_connections", 00:04:47.608 "iscsi_portal_group_set_auth", 00:04:47.608 "iscsi_start_portal_group", 00:04:47.608 "iscsi_delete_portal_group", 00:04:47.608 "iscsi_create_portal_group", 00:04:47.608 "iscsi_get_portal_groups", 00:04:47.608 "iscsi_delete_target_node", 00:04:47.608 "iscsi_target_node_remove_pg_ig_maps", 00:04:47.608 "iscsi_target_node_add_pg_ig_maps", 00:04:47.608 "iscsi_create_target_node", 00:04:47.608 "iscsi_get_target_nodes", 00:04:47.608 "iscsi_delete_initiator_group", 00:04:47.608 "iscsi_initiator_group_remove_initiators", 00:04:47.608 "iscsi_initiator_group_add_initiators", 00:04:47.608 "iscsi_create_initiator_group", 00:04:47.608 "iscsi_get_initiator_groups", 00:04:47.608 "nvmf_set_crdt", 00:04:47.608 "nvmf_set_config", 00:04:47.608 "nvmf_set_max_subsystems", 00:04:47.608 "nvmf_subsystem_get_listeners", 00:04:47.608 "nvmf_subsystem_get_qpairs", 00:04:47.608 "nvmf_subsystem_get_controllers", 00:04:47.608 "nvmf_get_stats", 00:04:47.608 "nvmf_get_transports", 00:04:47.608 "nvmf_create_transport", 00:04:47.608 "nvmf_get_targets", 00:04:47.608 "nvmf_delete_target", 00:04:47.608 "nvmf_create_target", 00:04:47.608 "nvmf_subsystem_allow_any_host", 00:04:47.608 "nvmf_subsystem_remove_host", 00:04:47.608 "nvmf_subsystem_add_host", 00:04:47.608 "nvmf_subsystem_remove_ns", 00:04:47.608 "nvmf_subsystem_add_ns", 00:04:47.608 "nvmf_subsystem_listener_set_ana_state", 00:04:47.608 "nvmf_discovery_get_referrals", 00:04:47.608 "nvmf_discovery_remove_referral", 00:04:47.608 "nvmf_discovery_add_referral", 00:04:47.608 "nvmf_subsystem_remove_listener", 00:04:47.608 
"nvmf_subsystem_add_listener", 00:04:47.608 "nvmf_delete_subsystem", 00:04:47.608 "nvmf_create_subsystem", 00:04:47.608 "nvmf_get_subsystems", 00:04:47.608 "env_dpdk_get_mem_stats", 00:04:47.608 "nbd_get_disks", 00:04:47.608 "nbd_stop_disk", 00:04:47.608 "nbd_start_disk", 00:04:47.608 "ublk_recover_disk", 00:04:47.608 "ublk_get_disks", 00:04:47.608 "ublk_stop_disk", 00:04:47.608 "ublk_start_disk", 00:04:47.608 "ublk_destroy_target", 00:04:47.608 "ublk_create_target", 00:04:47.608 "virtio_blk_create_transport", 00:04:47.608 "virtio_blk_get_transports", 00:04:47.608 "vhost_controller_set_coalescing", 00:04:47.608 "vhost_get_controllers", 00:04:47.608 "vhost_delete_controller", 00:04:47.608 "vhost_create_blk_controller", 00:04:47.608 "vhost_scsi_controller_remove_target", 00:04:47.608 "vhost_scsi_controller_add_target", 00:04:47.608 "vhost_start_scsi_controller", 00:04:47.608 "vhost_create_scsi_controller", 00:04:47.608 "thread_set_cpumask", 00:04:47.608 "framework_get_scheduler", 00:04:47.608 "framework_set_scheduler", 00:04:47.608 "framework_get_reactors", 00:04:47.608 "thread_get_io_channels", 00:04:47.608 "thread_get_pollers", 00:04:47.608 "thread_get_stats", 00:04:47.608 "framework_monitor_context_switch", 00:04:47.608 "spdk_kill_instance", 00:04:47.608 "log_enable_timestamps", 00:04:47.608 "log_get_flags", 00:04:47.608 "log_clear_flag", 00:04:47.608 "log_set_flag", 00:04:47.608 "log_get_level", 00:04:47.608 "log_set_level", 00:04:47.608 "log_get_print_level", 00:04:47.608 "log_set_print_level", 00:04:47.608 "framework_enable_cpumask_locks", 00:04:47.608 "framework_disable_cpumask_locks", 00:04:47.608 "framework_wait_init", 00:04:47.608 "framework_start_init", 00:04:47.608 "scsi_get_devices", 00:04:47.608 "bdev_get_histogram", 00:04:47.608 "bdev_enable_histogram", 00:04:47.608 "bdev_set_qos_limit", 00:04:47.608 "bdev_set_qd_sampling_period", 00:04:47.608 "bdev_get_bdevs", 00:04:47.608 "bdev_reset_iostat", 00:04:47.608 "bdev_get_iostat", 00:04:47.608 "bdev_examine", 00:04:47.608 "bdev_wait_for_examine", 00:04:47.608 "bdev_set_options", 00:04:47.608 "notify_get_notifications", 00:04:47.608 "notify_get_types", 00:04:47.608 "accel_get_stats", 00:04:47.608 "accel_set_options", 00:04:47.608 "accel_set_driver", 00:04:47.608 "accel_crypto_key_destroy", 00:04:47.608 "accel_crypto_keys_get", 00:04:47.608 "accel_crypto_key_create", 00:04:47.608 "accel_assign_opc", 00:04:47.608 "accel_get_module_info", 00:04:47.608 "accel_get_opc_assignments", 00:04:47.608 "vmd_rescan", 00:04:47.608 "vmd_remove_device", 00:04:47.608 "vmd_enable", 00:04:47.608 "sock_set_default_impl", 00:04:47.608 "sock_impl_set_options", 00:04:47.608 "sock_impl_get_options", 00:04:47.608 "iobuf_get_stats", 00:04:47.608 "iobuf_set_options", 00:04:47.608 "framework_get_pci_devices", 00:04:47.608 "framework_get_config", 00:04:47.608 "framework_get_subsystems", 00:04:47.608 "trace_get_info", 00:04:47.608 "trace_get_tpoint_group_mask", 00:04:47.608 "trace_disable_tpoint_group", 00:04:47.608 "trace_enable_tpoint_group", 00:04:47.608 "trace_clear_tpoint_mask", 00:04:47.608 "trace_set_tpoint_mask", 00:04:47.608 "spdk_get_version", 00:04:47.608 "rpc_get_methods" 00:04:47.608 ] 00:04:47.608 09:42:41 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:47.608 09:42:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:47.608 09:42:41 -- common/autotest_common.sh@10 -- # set +x 00:04:47.608 09:42:41 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:47.608 09:42:41 -- spdkcli/tcp.sh@38 -- # killprocess 57429 00:04:47.608 
09:42:41 -- common/autotest_common.sh@926 -- # '[' -z 57429 ']' 00:04:47.608 09:42:41 -- common/autotest_common.sh@930 -- # kill -0 57429 00:04:47.608 09:42:41 -- common/autotest_common.sh@931 -- # uname 00:04:47.608 09:42:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:47.608 09:42:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57429 00:04:47.608 killing process with pid 57429 00:04:47.608 09:42:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:47.608 09:42:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:47.608 09:42:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57429' 00:04:47.608 09:42:41 -- common/autotest_common.sh@945 -- # kill 57429 00:04:47.608 09:42:41 -- common/autotest_common.sh@950 -- # wait 57429 00:04:49.512 ************************************ 00:04:49.512 END TEST spdkcli_tcp 00:04:49.512 ************************************ 00:04:49.512 00:04:49.512 real 0m3.889s 00:04:49.512 user 0m7.234s 00:04:49.512 sys 0m0.514s 00:04:49.512 09:42:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.512 09:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:49.512 09:42:43 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.512 09:42:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.512 09:42:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.512 09:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:49.512 ************************************ 00:04:49.512 START TEST dpdk_mem_utility 00:04:49.512 ************************************ 00:04:49.512 09:42:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.512 * Looking for test storage... 00:04:49.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:49.512 09:42:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:49.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.512 09:42:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57539 00:04:49.512 09:42:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57539 00:04:49.512 09:42:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.512 09:42:43 -- common/autotest_common.sh@819 -- # '[' -z 57539 ']' 00:04:49.512 09:42:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.512 09:42:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:49.512 09:42:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.512 09:42:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:49.512 09:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:49.771 [2024-06-10 09:42:43.306683] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:04:49.771 [2024-06-10 09:42:43.307057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57539 ] 00:04:49.771 [2024-06-10 09:42:43.472590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.030 [2024-06-10 09:42:43.627384] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:50.030 [2024-06-10 09:42:43.627897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.409 09:42:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:51.409 09:42:44 -- common/autotest_common.sh@852 -- # return 0 00:04:51.409 09:42:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:51.409 09:42:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:51.409 09:42:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:51.409 09:42:44 -- common/autotest_common.sh@10 -- # set +x 00:04:51.409 { 00:04:51.409 "filename": "/tmp/spdk_mem_dump.txt" 00:04:51.409 } 00:04:51.409 09:42:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:51.409 09:42:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:51.409 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:51.409 1 heaps totaling size 820.000000 MiB 00:04:51.409 size: 820.000000 MiB heap id: 0 00:04:51.409 end heaps---------- 00:04:51.409 8 mempools totaling size 598.116089 MiB 00:04:51.409 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:51.409 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:51.409 size: 84.521057 MiB name: bdev_io_57539 00:04:51.409 size: 51.011292 MiB name: evtpool_57539 00:04:51.409 size: 50.003479 MiB name: msgpool_57539 00:04:51.409 size: 21.763794 MiB name: PDU_Pool 00:04:51.409 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:51.409 size: 0.026123 MiB name: Session_Pool 00:04:51.409 end mempools------- 00:04:51.409 6 memzones totaling size 4.142822 MiB 00:04:51.409 size: 1.000366 MiB name: RG_ring_0_57539 00:04:51.409 size: 1.000366 MiB name: RG_ring_1_57539 00:04:51.409 size: 1.000366 MiB name: RG_ring_4_57539 00:04:51.409 size: 1.000366 MiB name: RG_ring_5_57539 00:04:51.409 size: 0.125366 MiB name: RG_ring_2_57539 00:04:51.409 size: 0.015991 MiB name: RG_ring_3_57539 00:04:51.409 end memzones------- 00:04:51.409 09:42:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:51.409 heap id: 0 total size: 820.000000 MiB number of busy elements: 302 number of free elements: 18 00:04:51.409 list of free elements. 
size: 18.451050 MiB 00:04:51.409 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:51.409 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:51.409 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:51.409 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:51.409 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:51.409 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:51.409 element at address: 0x200019600000 with size: 0.999084 MiB 00:04:51.409 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:51.409 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:51.409 element at address: 0x200018e00000 with size: 0.959656 MiB 00:04:51.409 element at address: 0x200019900040 with size: 0.936401 MiB 00:04:51.409 element at address: 0x200000200000 with size: 0.829224 MiB 00:04:51.409 element at address: 0x20001b000000 with size: 0.564636 MiB 00:04:51.409 element at address: 0x200019200000 with size: 0.487976 MiB 00:04:51.409 element at address: 0x200019a00000 with size: 0.485413 MiB 00:04:51.409 element at address: 0x200013800000 with size: 0.467651 MiB 00:04:51.409 element at address: 0x200028400000 with size: 0.390442 MiB 00:04:51.409 element at address: 0x200003a00000 with size: 0.351990 MiB 00:04:51.409 list of standard malloc elements. size: 199.284546 MiB 00:04:51.409 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:51.409 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:51.409 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:51.409 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:51.409 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:51.409 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:51.409 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:51.409 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:51.409 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:04:51.409 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:04:51.409 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:04:51.409 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:04:51.409 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:04:51.410 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:04:51.410 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200013877b80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200013877c80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200013877d80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200013877e80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200013877f80 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200013878080 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200013878180 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200013878280 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200013878380 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200013878480 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200013878580 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x200019abc680 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:04:51.410 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b092ac0 with size: 0.000244 MiB 
00:04:51.411 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:04:51.411 element at address: 0x200028463f40 with size: 0.000244 MiB 00:04:51.411 element at address: 0x200028464040 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846af80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846b080 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846b180 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846b280 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846b380 with size: 0.000244 MiB 00:04:51.411 element at 
address: 0x20002846b480 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846b580 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846b680 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846b780 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846b880 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846b980 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846be80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846c080 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846c180 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846c280 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846c380 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846c480 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846c580 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846c680 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846c780 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846c880 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846c980 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846d080 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846d180 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846d280 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846d380 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846d480 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846d580 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846d680 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846d780 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846d880 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846d980 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846da80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846db80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846de80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846df80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846e080 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846e180 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846e280 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846e380 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846e480 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846e580 
with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846e680 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846e780 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846e880 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846e980 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846f080 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846f180 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846f280 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846f380 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846f480 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846f580 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846f680 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846f780 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846f880 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846f980 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:04:51.411 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:04:51.411 list of memzone associated elements. 
size: 602.264404 MiB 00:04:51.411 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:51.411 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:51.411 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:51.411 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:51.411 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:51.411 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_57539_0 00:04:51.411 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:51.411 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57539_0 00:04:51.411 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:51.412 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57539_0 00:04:51.412 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:51.412 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:51.412 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:51.412 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:51.412 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:51.412 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57539 00:04:51.412 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:51.412 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57539 00:04:51.412 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:51.412 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57539 00:04:51.412 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:51.412 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:51.412 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:51.412 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:51.412 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:51.412 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:51.412 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:51.412 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:51.412 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:51.412 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57539 00:04:51.412 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:51.412 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57539 00:04:51.412 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:51.412 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57539 00:04:51.412 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:51.412 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57539 00:04:51.412 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:51.412 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57539 00:04:51.412 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:04:51.412 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:51.412 element at address: 0x200013878680 with size: 0.500549 MiB 00:04:51.412 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:51.412 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:04:51.412 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:51.412 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:51.412 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_57539 00:04:51.412 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:04:51.412 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:51.412 element at address: 0x200028464140 with size: 0.023804 MiB 00:04:51.412 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:51.412 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:51.412 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57539 00:04:51.412 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:04:51.412 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:51.412 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:04:51.412 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57539 00:04:51.412 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:51.412 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57539 00:04:51.412 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:04:51.412 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:51.412 09:42:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:51.412 09:42:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57539 00:04:51.412 09:42:45 -- common/autotest_common.sh@926 -- # '[' -z 57539 ']' 00:04:51.412 09:42:45 -- common/autotest_common.sh@930 -- # kill -0 57539 00:04:51.412 09:42:45 -- common/autotest_common.sh@931 -- # uname 00:04:51.412 09:42:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:51.412 09:42:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57539 00:04:51.412 killing process with pid 57539 00:04:51.412 09:42:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:51.412 09:42:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:51.412 09:42:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57539' 00:04:51.412 09:42:45 -- common/autotest_common.sh@945 -- # kill 57539 00:04:51.412 09:42:45 -- common/autotest_common.sh@950 -- # wait 57539 00:04:53.317 ************************************ 00:04:53.317 END TEST dpdk_mem_utility 00:04:53.317 ************************************ 00:04:53.317 00:04:53.317 real 0m3.720s 00:04:53.317 user 0m3.941s 00:04:53.317 sys 0m0.466s 00:04:53.317 09:42:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.317 09:42:46 -- common/autotest_common.sh@10 -- # set +x 00:04:53.317 09:42:46 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:53.317 09:42:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.317 09:42:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.317 09:42:46 -- common/autotest_common.sh@10 -- # set +x 00:04:53.317 ************************************ 00:04:53.317 START TEST event 00:04:53.317 ************************************ 00:04:53.317 09:42:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:53.317 * Looking for test storage... 
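The memory report above is produced in the two steps the trace shows: the env_dpdk_get_mem_stats RPC makes the target write its DPDK allocator state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py renders it, first as the heap/mempool/memzone summary and then, with -m 0, as the per-element detail for heap 0. A sketch of the same two steps against a running target, paths as in the trace:

    # Ask the target to dump its DPDK memory state.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

    # Summarize, then show heap 0 element by element.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0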
00:04:53.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:53.317 09:42:46 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:53.317 09:42:46 -- bdev/nbd_common.sh@6 -- # set -e 00:04:53.317 09:42:46 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.317 09:42:46 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:53.317 09:42:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.317 09:42:46 -- common/autotest_common.sh@10 -- # set +x 00:04:53.317 ************************************ 00:04:53.317 START TEST event_perf 00:04:53.317 ************************************ 00:04:53.317 09:42:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.317 Running I/O for 1 seconds...[2024-06-10 09:42:47.022866] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:04:53.317 [2024-06-10 09:42:47.023193] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57640 ] 00:04:53.576 [2024-06-10 09:42:47.194368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:53.835 [2024-06-10 09:42:47.362222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.835 [2024-06-10 09:42:47.362366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.835 [2024-06-10 09:42:47.362497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.835 [2024-06-10 09:42:47.362511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.213 Running I/O for 1 seconds... 00:04:55.213 lcore 0: 196041 00:04:55.213 lcore 1: 196041 00:04:55.213 lcore 2: 196041 00:04:55.213 lcore 3: 196042 00:04:55.213 done. 00:04:55.213 00:04:55.213 real 0m1.702s 00:04:55.213 user 0m4.484s 00:04:55.213 sys 0m0.095s 00:04:55.213 09:42:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.213 ************************************ 00:04:55.213 END TEST event_perf 00:04:55.213 ************************************ 00:04:55.213 09:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:55.213 09:42:48 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.213 09:42:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:55.213 09:42:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.213 09:42:48 -- common/autotest_common.sh@10 -- # set +x 00:04:55.213 ************************************ 00:04:55.213 START TEST event_reactor 00:04:55.213 ************************************ 00:04:55.213 09:42:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.213 [2024-06-10 09:42:48.761640] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
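Note: the four per-lcore counters printed by event_perf can be summed for a rough aggregate rate over the 1-second run; a quick back-of-the-envelope check, not part of the test itself:

    echo $(( 196041 + 196041 + 196041 + 196042 ))   # 784165 events in ~1 s across 4 cores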
00:04:55.213 [2024-06-10 09:42:48.762286] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57685 ] 00:04:55.213 [2024-06-10 09:42:48.915631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.472 [2024-06-10 09:42:49.074929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.891 test_start 00:04:56.891 oneshot 00:04:56.891 tick 100 00:04:56.891 tick 100 00:04:56.891 tick 250 00:04:56.891 tick 100 00:04:56.891 tick 100 00:04:56.891 tick 100 00:04:56.891 tick 250 00:04:56.891 tick 500 00:04:56.891 tick 100 00:04:56.892 tick 100 00:04:56.892 tick 250 00:04:56.892 tick 100 00:04:56.892 tick 100 00:04:56.892 test_end 00:04:56.892 ************************************ 00:04:56.892 END TEST event_reactor 00:04:56.892 ************************************ 00:04:56.892 00:04:56.892 real 0m1.663s 00:04:56.892 user 0m1.479s 00:04:56.892 sys 0m0.075s 00:04:56.892 09:42:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.892 09:42:50 -- common/autotest_common.sh@10 -- # set +x 00:04:56.892 09:42:50 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.892 09:42:50 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:56.892 09:42:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.892 09:42:50 -- common/autotest_common.sh@10 -- # set +x 00:04:56.892 ************************************ 00:04:56.892 START TEST event_reactor_perf 00:04:56.892 ************************************ 00:04:56.892 09:42:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.892 [2024-06-10 09:42:50.486690] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
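Note: every sub-test here is launched through the run_test helper, which produces the starred START/END banners and the real/user/sys timings that recur throughout this log. Schematically it looks something like the sketch below (illustrative only; the actual helper in autotest_common.sh does more bookkeeping around xtrace and exit codes):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                                   # e.g. run_test event_reactor .../reactor -t 1
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }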
00:04:56.892 [2024-06-10 09:42:50.486855] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57716 ] 00:04:56.892 [2024-06-10 09:42:50.652377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.151 [2024-06-10 09:42:50.814512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.531 test_start 00:04:58.531 test_end 00:04:58.531 Performance: 313056 events per second 00:04:58.531 ************************************ 00:04:58.531 END TEST event_reactor_perf 00:04:58.531 ************************************ 00:04:58.531 00:04:58.531 real 0m1.651s 00:04:58.531 user 0m1.442s 00:04:58.531 sys 0m0.099s 00:04:58.531 09:42:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.531 09:42:52 -- common/autotest_common.sh@10 -- # set +x 00:04:58.531 09:42:52 -- event/event.sh@49 -- # uname -s 00:04:58.531 09:42:52 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:58.531 09:42:52 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:58.531 09:42:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.531 09:42:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.531 09:42:52 -- common/autotest_common.sh@10 -- # set +x 00:04:58.531 ************************************ 00:04:58.531 START TEST event_scheduler 00:04:58.531 ************************************ 00:04:58.531 09:42:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:58.531 * Looking for test storage... 00:04:58.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:58.531 09:42:52 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:58.531 09:42:52 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57783 00:04:58.531 09:42:52 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.531 09:42:52 -- scheduler/scheduler.sh@37 -- # waitforlisten 57783 00:04:58.531 09:42:52 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:58.531 09:42:52 -- common/autotest_common.sh@819 -- # '[' -z 57783 ']' 00:04:58.531 09:42:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.531 09:42:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:58.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.531 09:42:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.531 09:42:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:58.531 09:42:52 -- common/autotest_common.sh@10 -- # set +x 00:04:58.792 [2024-06-10 09:42:52.325681] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
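Note: 313056 events per second on one core works out to roughly 3.2 microseconds per event end to end (presumably allocate, enqueue, execute, and resubmit, though the exact loop lives in the reactor_perf source). The conversion:

    echo "scale=2; 1000000 / 313056" | bc           # ~3.19 us per event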
00:04:58.792 [2024-06-10 09:42:52.325842] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57783 ] 00:04:58.792 [2024-06-10 09:42:52.500796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.050 [2024-06-10 09:42:52.728086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.050 [2024-06-10 09:42:52.728210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.050 [2024-06-10 09:42:52.729606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.050 [2024-06-10 09:42:52.729661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.617 09:42:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:59.617 09:42:53 -- common/autotest_common.sh@852 -- # return 0 00:04:59.617 09:42:53 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:59.617 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.617 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.617 POWER: Env isn't set yet! 00:04:59.617 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:59.617 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.617 POWER: Cannot set governor of lcore 0 to userspace 00:04:59.617 POWER: Attempting to initialise PSTAT power management... 00:04:59.617 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.617 POWER: Cannot set governor of lcore 0 to performance 00:04:59.617 POWER: Attempting to initialise AMD PSTATE power management... 00:04:59.617 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.617 POWER: Cannot set governor of lcore 0 to userspace 00:04:59.617 POWER: Attempting to initialise CPPC power management... 00:04:59.617 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.617 POWER: Cannot set governor of lcore 0 to userspace 00:04:59.617 POWER: Attempting to initialise VM power management... 00:04:59.617 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:59.617 POWER: Unable to set Power Management Environment for lcore 0 00:04:59.617 [2024-06-10 09:42:53.263199] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:59.617 [2024-06-10 09:42:53.263224] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:59.617 [2024-06-10 09:42:53.263239] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:59.617 09:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.617 09:42:53 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:59.618 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.618 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.876 [2024-06-10 09:42:53.510412] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
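Note: the string of POWER errors above is normal inside a VM: each cpufreq driver probe (ACPI, P-STATE, AMD PSTATE, CPPC) tries to open scaling_governor under sysfs, the guest exposes no cpufreq nodes, and the dynamic scheduler carries on without a governor. A quick hypothetical way to see what a given machine actually exposes:

    for f in /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver \
             /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors; do
        [ -r "$f" ] && echo "$f: $(cat "$f")" || echo "$f: not present (no cpufreq support)"
    done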
00:04:59.876 09:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.876 09:42:53 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:59.876 09:42:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:59.876 09:42:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.876 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.876 ************************************ 00:04:59.876 START TEST scheduler_create_thread 00:04:59.876 ************************************ 00:04:59.876 09:42:53 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:59.876 09:42:53 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:59.876 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.876 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.876 2 00:04:59.876 09:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.876 09:42:53 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:59.876 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.876 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.876 3 00:04:59.876 09:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.876 09:42:53 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:59.876 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.876 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.876 4 00:04:59.876 09:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.876 09:42:53 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:59.876 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.876 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.876 5 00:04:59.876 09:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.876 09:42:53 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:59.876 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.876 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.876 6 00:04:59.876 09:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.876 09:42:53 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:59.876 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.876 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.876 7 00:04:59.876 09:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.876 09:42:53 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:59.876 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.876 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.876 8 00:04:59.876 09:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.876 09:42:53 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:59.876 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.876 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.876 9 00:04:59.876 
09:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.876 09:42:53 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:59.876 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.876 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.877 10 00:04:59.877 09:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.877 09:42:53 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:59.877 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.877 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.877 09:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.877 09:42:53 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:59.877 09:42:53 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:59.877 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.877 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.877 09:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.877 09:42:53 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:59.877 09:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.877 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:01.254 09:42:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.254 09:42:54 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:01.254 09:42:54 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:01.254 09:42:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.254 09:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:02.190 ************************************ 00:05:02.190 END TEST scheduler_create_thread 00:05:02.190 ************************************ 00:05:02.190 09:42:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:02.190 00:05:02.190 real 0m2.136s 00:05:02.190 user 0m0.019s 00:05:02.190 sys 0m0.003s 00:05:02.190 09:42:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.190 09:42:55 -- common/autotest_common.sh@10 -- # set +x 00:05:02.190 09:42:55 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:02.190 09:42:55 -- scheduler/scheduler.sh@46 -- # killprocess 57783 00:05:02.190 09:42:55 -- common/autotest_common.sh@926 -- # '[' -z 57783 ']' 00:05:02.190 09:42:55 -- common/autotest_common.sh@930 -- # kill -0 57783 00:05:02.190 09:42:55 -- common/autotest_common.sh@931 -- # uname 00:05:02.190 09:42:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:02.190 09:42:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57783 00:05:02.190 killing process with pid 57783 00:05:02.190 09:42:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:02.190 09:42:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:02.190 09:42:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57783' 00:05:02.190 09:42:55 -- common/autotest_common.sh@945 -- # kill 57783 00:05:02.190 09:42:55 -- common/autotest_common.sh@950 -- # wait 57783 00:05:02.449 [2024-06-10 09:42:56.142950] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
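Note: the scheduler_create_thread sub-test drives everything over JSON-RPC with the scheduler test plugin: create pinned threads with a cpumask (-m) and an active percentage (-a), resize one with scheduler_thread_set_active, then delete one. Condensed from the trace (socket path assumed; inside the test, $rpc is the rpc_cmd wrapper):

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100% busy, pinned to core 0
    id=$($rpc scheduler_thread_create -n half_active -a 0)        # returns the new thread id (11 above)
    $rpc scheduler_thread_set_active "$id" 50                     # bump it to 50% active
    $rpc scheduler_thread_delete "$id"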
00:05:03.387 00:05:03.387 real 0m4.994s 00:05:03.387 user 0m8.352s 00:05:03.387 sys 0m0.402s 00:05:03.387 ************************************ 00:05:03.387 END TEST event_scheduler 00:05:03.387 ************************************ 00:05:03.387 09:42:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.387 09:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:03.646 09:42:57 -- event/event.sh@51 -- # modprobe -n nbd 00:05:03.646 09:42:57 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:03.646 09:42:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:03.646 09:42:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.646 09:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:03.646 ************************************ 00:05:03.646 START TEST app_repeat 00:05:03.646 ************************************ 00:05:03.646 09:42:57 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:03.646 09:42:57 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.646 09:42:57 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.646 09:42:57 -- event/event.sh@13 -- # local nbd_list 00:05:03.646 09:42:57 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.646 09:42:57 -- event/event.sh@14 -- # local bdev_list 00:05:03.646 09:42:57 -- event/event.sh@15 -- # local repeat_times=4 00:05:03.646 09:42:57 -- event/event.sh@17 -- # modprobe nbd 00:05:03.646 Process app_repeat pid: 57889 00:05:03.646 spdk_app_start Round 0 00:05:03.646 09:42:57 -- event/event.sh@19 -- # repeat_pid=57889 00:05:03.646 09:42:57 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.646 09:42:57 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:03.646 09:42:57 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57889' 00:05:03.646 09:42:57 -- event/event.sh@23 -- # for i in {0..2} 00:05:03.646 09:42:57 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:03.646 09:42:57 -- event/event.sh@25 -- # waitforlisten 57889 /var/tmp/spdk-nbd.sock 00:05:03.646 09:42:57 -- common/autotest_common.sh@819 -- # '[' -z 57889 ']' 00:05:03.646 09:42:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.646 09:42:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:03.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.646 09:42:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.646 09:42:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:03.646 09:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:03.646 [2024-06-10 09:42:57.263902] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
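Note: waitforlisten (with its local max_retries=100) is what turns "Waiting for process to start up and listen on UNIX domain socket..." into a bounded poll: keep the pid alive-check and an RPC ping going until one of them settles. A minimal sketch of the idea, run from the repo root (simplified relative to the real helper):

    waitforlisten() {
        local pid=$1 addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 1; i <= 100; i++)); do
            kill -0 "$pid" || return 1                                  # app died before listening
            scripts/rpc.py -s "$addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                                        # gave up after 100 tries
    }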
00:05:03.646 [2024-06-10 09:42:57.264060] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57889 ] 00:05:03.905 [2024-06-10 09:42:57.433293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.905 [2024-06-10 09:42:57.600661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.905 [2024-06-10 09:42:57.600671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.473 09:42:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:04.473 09:42:58 -- common/autotest_common.sh@852 -- # return 0 00:05:04.473 09:42:58 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.732 Malloc0 00:05:04.991 09:42:58 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.250 Malloc1 00:05:05.250 09:42:58 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@12 -- # local i 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.250 09:42:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.509 /dev/nbd0 00:05:05.509 09:42:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.509 09:42:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.509 09:42:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:05.509 09:42:59 -- common/autotest_common.sh@857 -- # local i 00:05:05.509 09:42:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:05.509 09:42:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:05.509 09:42:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:05.509 09:42:59 -- common/autotest_common.sh@861 -- # break 00:05:05.509 09:42:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:05.509 09:42:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:05.509 09:42:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.509 1+0 records in 00:05:05.509 1+0 records out 00:05:05.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327814 s, 12.5 MB/s 00:05:05.509 09:42:59 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.509 09:42:59 -- common/autotest_common.sh@874 -- # size=4096 00:05:05.509 09:42:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.509 09:42:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:05.509 09:42:59 -- common/autotest_common.sh@877 -- # return 0 00:05:05.509 09:42:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.509 09:42:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.509 09:42:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.769 /dev/nbd1 00:05:05.769 09:42:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.769 09:42:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.769 09:42:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:05.769 09:42:59 -- common/autotest_common.sh@857 -- # local i 00:05:05.769 09:42:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:05.769 09:42:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:05.769 09:42:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:05.769 09:42:59 -- common/autotest_common.sh@861 -- # break 00:05:05.769 09:42:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:05.769 09:42:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:05.769 09:42:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.769 1+0 records in 00:05:05.769 1+0 records out 00:05:05.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335658 s, 12.2 MB/s 00:05:05.769 09:42:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.769 09:42:59 -- common/autotest_common.sh@874 -- # size=4096 00:05:05.769 09:42:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.769 09:42:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:05.769 09:42:59 -- common/autotest_common.sh@877 -- # return 0 00:05:05.769 09:42:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.769 09:42:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.769 09:42:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.769 09:42:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.769 09:42:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.028 { 00:05:06.028 "nbd_device": "/dev/nbd0", 00:05:06.028 "bdev_name": "Malloc0" 00:05:06.028 }, 00:05:06.028 { 00:05:06.028 "nbd_device": "/dev/nbd1", 00:05:06.028 "bdev_name": "Malloc1" 00:05:06.028 } 00:05:06.028 ]' 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.028 { 00:05:06.028 "nbd_device": "/dev/nbd0", 00:05:06.028 "bdev_name": "Malloc0" 00:05:06.028 }, 00:05:06.028 { 00:05:06.028 "nbd_device": "/dev/nbd1", 00:05:06.028 "bdev_name": "Malloc1" 00:05:06.028 } 00:05:06.028 ]' 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.028 /dev/nbd1' 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:05:06.028 /dev/nbd1' 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.028 256+0 records in 00:05:06.028 256+0 records out 00:05:06.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010381 s, 101 MB/s 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.028 256+0 records in 00:05:06.028 256+0 records out 00:05:06.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286235 s, 36.6 MB/s 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.028 256+0 records in 00:05:06.028 256+0 records out 00:05:06.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028622 s, 36.6 MB/s 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.028 09:42:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.287 09:42:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.287 09:42:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.287 09:42:59 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.287 09:42:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.287 09:42:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.287 09:42:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.287 09:42:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.287 09:42:59 -- bdev/nbd_common.sh@51 -- # local i 00:05:06.287 09:42:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.287 09:42:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd0 00:05:06.546 09:43:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.546 09:43:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.546 09:43:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.546 09:43:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.546 09:43:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.546 09:43:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.546 09:43:00 -- bdev/nbd_common.sh@41 -- # break 00:05:06.546 09:43:00 -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.546 09:43:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.546 09:43:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.805 09:43:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.805 09:43:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.805 09:43:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.805 09:43:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.805 09:43:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.805 09:43:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.805 09:43:00 -- bdev/nbd_common.sh@41 -- # break 00:05:06.805 09:43:00 -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.805 09:43:00 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.805 09:43:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.805 09:43:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.078 09:43:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.078 09:43:00 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.078 09:43:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.078 09:43:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.078 09:43:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.078 09:43:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.078 09:43:00 -- bdev/nbd_common.sh@65 -- # true 00:05:07.078 09:43:00 -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.078 09:43:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.078 09:43:00 -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.078 09:43:00 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.078 09:43:00 -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.078 09:43:00 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.337 09:43:01 -- event/event.sh@35 -- # sleep 3 00:05:08.714 [2024-06-10 09:43:02.130045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.714 [2024-06-10 09:43:02.283537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.714 [2024-06-10 09:43:02.283544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.714 [2024-06-10 09:43:02.440939] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.714 [2024-06-10 09:43:02.441004] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.630 spdk_app_start Round 1 00:05:10.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
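Note: each round's data check is the nbd_dd_data_verify flow traced above: stage 1 MiB of random data in a temp file, dd it onto each exported /dev/nbdX with O_DIRECT, then cmp the first 1 MiB back. Roughly:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256                 # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct      # write it to the NBD device
        cmp -b -n 1M "$tmp" "$nbd"                                 # verify it reads back identically
    done
    rm "$tmp"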
00:05:10.630 09:43:04 -- event/event.sh@23 -- # for i in {0..2} 00:05:10.631 09:43:04 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:10.631 09:43:04 -- event/event.sh@25 -- # waitforlisten 57889 /var/tmp/spdk-nbd.sock 00:05:10.631 09:43:04 -- common/autotest_common.sh@819 -- # '[' -z 57889 ']' 00:05:10.631 09:43:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.631 09:43:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:10.631 09:43:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.631 09:43:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:10.631 09:43:04 -- common/autotest_common.sh@10 -- # set +x 00:05:10.631 09:43:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:10.631 09:43:04 -- common/autotest_common.sh@852 -- # return 0 00:05:10.631 09:43:04 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.889 Malloc0 00:05:10.889 09:43:04 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.149 Malloc1 00:05:11.149 09:43:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@12 -- # local i 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.149 09:43:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.408 /dev/nbd0 00:05:11.409 09:43:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.409 09:43:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.409 09:43:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:11.409 09:43:05 -- common/autotest_common.sh@857 -- # local i 00:05:11.409 09:43:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:11.409 09:43:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:11.409 09:43:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:11.409 09:43:05 -- common/autotest_common.sh@861 -- # break 00:05:11.409 09:43:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:11.409 09:43:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:11.409 09:43:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:11.409 1+0 records in 00:05:11.409 1+0 records out 00:05:11.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550336 s, 7.4 MB/s 00:05:11.409 09:43:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.409 09:43:05 -- common/autotest_common.sh@874 -- # size=4096 00:05:11.409 09:43:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.409 09:43:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:11.409 09:43:05 -- common/autotest_common.sh@877 -- # return 0 00:05:11.409 09:43:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.409 09:43:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.409 09:43:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.668 /dev/nbd1 00:05:11.668 09:43:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.668 09:43:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.668 09:43:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:11.668 09:43:05 -- common/autotest_common.sh@857 -- # local i 00:05:11.668 09:43:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:11.668 09:43:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:11.668 09:43:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:11.668 09:43:05 -- common/autotest_common.sh@861 -- # break 00:05:11.668 09:43:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:11.668 09:43:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:11.668 09:43:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.668 1+0 records in 00:05:11.668 1+0 records out 00:05:11.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243665 s, 16.8 MB/s 00:05:11.668 09:43:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.668 09:43:05 -- common/autotest_common.sh@874 -- # size=4096 00:05:11.668 09:43:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.668 09:43:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:11.668 09:43:05 -- common/autotest_common.sh@877 -- # return 0 00:05:11.668 09:43:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.668 09:43:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.668 09:43:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.668 09:43:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.668 09:43:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.927 { 00:05:11.927 "nbd_device": "/dev/nbd0", 00:05:11.927 "bdev_name": "Malloc0" 00:05:11.927 }, 00:05:11.927 { 00:05:11.927 "nbd_device": "/dev/nbd1", 00:05:11.927 "bdev_name": "Malloc1" 00:05:11.927 } 00:05:11.927 ]' 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.927 { 00:05:11.927 "nbd_device": "/dev/nbd0", 00:05:11.927 "bdev_name": "Malloc0" 00:05:11.927 }, 00:05:11.927 { 00:05:11.927 "nbd_device": "/dev/nbd1", 00:05:11.927 "bdev_name": "Malloc1" 00:05:11.927 } 00:05:11.927 ]' 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name='/dev/nbd0 00:05:11.927 /dev/nbd1' 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.927 /dev/nbd1' 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.927 256+0 records in 00:05:11.927 256+0 records out 00:05:11.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00657014 s, 160 MB/s 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.927 09:43:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.187 256+0 records in 00:05:12.187 256+0 records out 00:05:12.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259353 s, 40.4 MB/s 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.187 256+0 records in 00:05:12.187 256+0 records out 00:05:12.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276547 s, 37.9 MB/s 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@51 -- # local i 00:05:12.187 
09:43:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.187 09:43:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.446 09:43:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.446 09:43:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.446 09:43:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.446 09:43:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.446 09:43:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.446 09:43:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.446 09:43:06 -- bdev/nbd_common.sh@41 -- # break 00:05:12.446 09:43:06 -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.446 09:43:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.446 09:43:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@41 -- # break 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.705 09:43:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.965 09:43:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.965 09:43:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.965 09:43:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.965 09:43:06 -- bdev/nbd_common.sh@65 -- # true 00:05:12.965 09:43:06 -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.965 09:43:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.965 09:43:06 -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.965 09:43:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.965 09:43:06 -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.965 09:43:06 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.224 09:43:06 -- event/event.sh@35 -- # sleep 3 00:05:14.602 [2024-06-10 09:43:07.952991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.602 [2024-06-10 09:43:08.107968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.602 [2024-06-10 09:43:08.107972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.602 [2024-06-10 09:43:08.262095] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.602 [2024-06-10 09:43:08.262211] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
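Note: the MB/s figures dd prints for each 1 MiB pass are plain bytes/seconds in decimal megabytes; for the 0.0259353 s Malloc0 write above:

    echo "scale=1; 1048576 / 0.0259353 / 1000000" | bc    # 40.4, matching dd's reported 40.4 MB/s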
00:05:16.504 spdk_app_start Round 2 00:05:16.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.505 09:43:09 -- event/event.sh@23 -- # for i in {0..2} 00:05:16.505 09:43:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:16.505 09:43:09 -- event/event.sh@25 -- # waitforlisten 57889 /var/tmp/spdk-nbd.sock 00:05:16.505 09:43:09 -- common/autotest_common.sh@819 -- # '[' -z 57889 ']' 00:05:16.505 09:43:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.505 09:43:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:16.505 09:43:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.505 09:43:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:16.505 09:43:09 -- common/autotest_common.sh@10 -- # set +x 00:05:16.505 09:43:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:16.505 09:43:10 -- common/autotest_common.sh@852 -- # return 0 00:05:16.505 09:43:10 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.764 Malloc0 00:05:16.764 09:43:10 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.023 Malloc1 00:05:17.023 09:43:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@12 -- # local i 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.023 09:43:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.283 /dev/nbd0 00:05:17.283 09:43:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.283 09:43:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.283 09:43:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:17.283 09:43:10 -- common/autotest_common.sh@857 -- # local i 00:05:17.283 09:43:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:17.283 09:43:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:17.283 09:43:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:17.283 09:43:10 -- common/autotest_common.sh@861 -- # break 00:05:17.283 09:43:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:17.283 09:43:10 -- common/autotest_common.sh@872 -- # (( i 
<= 20 )) 00:05:17.283 09:43:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.283 1+0 records in 00:05:17.283 1+0 records out 00:05:17.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277529 s, 14.8 MB/s 00:05:17.283 09:43:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.283 09:43:10 -- common/autotest_common.sh@874 -- # size=4096 00:05:17.283 09:43:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.283 09:43:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:17.283 09:43:10 -- common/autotest_common.sh@877 -- # return 0 00:05:17.283 09:43:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.283 09:43:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.283 09:43:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.542 /dev/nbd1 00:05:17.542 09:43:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.542 09:43:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.542 09:43:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:17.542 09:43:11 -- common/autotest_common.sh@857 -- # local i 00:05:17.542 09:43:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:17.542 09:43:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:17.542 09:43:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:17.542 09:43:11 -- common/autotest_common.sh@861 -- # break 00:05:17.542 09:43:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:17.542 09:43:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:17.542 09:43:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.542 1+0 records in 00:05:17.542 1+0 records out 00:05:17.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257651 s, 15.9 MB/s 00:05:17.542 09:43:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.542 09:43:11 -- common/autotest_common.sh@874 -- # size=4096 00:05:17.542 09:43:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.542 09:43:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:17.542 09:43:11 -- common/autotest_common.sh@877 -- # return 0 00:05:17.542 09:43:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.542 09:43:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.542 09:43:11 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.542 09:43:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.542 09:43:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.800 09:43:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.800 { 00:05:17.800 "nbd_device": "/dev/nbd0", 00:05:17.800 "bdev_name": "Malloc0" 00:05:17.800 }, 00:05:17.800 { 00:05:17.800 "nbd_device": "/dev/nbd1", 00:05:17.800 "bdev_name": "Malloc1" 00:05:17.800 } 00:05:17.800 ]' 00:05:17.800 09:43:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.800 { 00:05:17.800 "nbd_device": "/dev/nbd0", 00:05:17.800 "bdev_name": "Malloc0" 00:05:17.800 }, 00:05:17.800 { 00:05:17.800 "nbd_device": "/dev/nbd1", 00:05:17.800 "bdev_name": "Malloc1" 00:05:17.800 } 
00:05:17.800 ]' 00:05:17.800 09:43:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.800 09:43:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.800 /dev/nbd1' 00:05:17.800 09:43:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.800 /dev/nbd1' 00:05:17.800 09:43:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.059 256+0 records in 00:05:18.059 256+0 records out 00:05:18.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0063488 s, 165 MB/s 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.059 256+0 records in 00:05:18.059 256+0 records out 00:05:18.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261625 s, 40.1 MB/s 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.059 256+0 records in 00:05:18.059 256+0 records out 00:05:18.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0382903 s, 27.4 MB/s 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
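Note: nbd_get_count derives its device count by piping the nbd_get_disks JSON through the jq filter seen above; the same one-liner works interactively against this run's socket:

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd    # 2 while both disks are exported, 0 after teardown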
00:05:18.059 09:43:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@51 -- # local i 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.059 09:43:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.317 09:43:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.317 09:43:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.317 09:43:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.317 09:43:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.317 09:43:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.317 09:43:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.317 09:43:11 -- bdev/nbd_common.sh@41 -- # break 00:05:18.317 09:43:11 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.317 09:43:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.317 09:43:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.574 09:43:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.574 09:43:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.574 09:43:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.574 09:43:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.574 09:43:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.574 09:43:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.574 09:43:12 -- bdev/nbd_common.sh@41 -- # break 00:05:18.574 09:43:12 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.574 09:43:12 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.574 09:43:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.574 09:43:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.833 09:43:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.833 09:43:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.833 09:43:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.833 09:43:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.833 09:43:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.833 09:43:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.833 09:43:12 -- bdev/nbd_common.sh@65 -- # true 00:05:18.833 09:43:12 -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.833 09:43:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.833 09:43:12 -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.833 09:43:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.833 09:43:12 -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.833 09:43:12 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.400 09:43:12 -- event/event.sh@35 -- # sleep 3 00:05:20.363 [2024-06-10 09:43:13.839616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.363 [2024-06-10 09:43:13.998674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.363 [2024-06-10 09:43:13.998681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.622 [2024-06-10 09:43:14.146875] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
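The write/verify pass earlier in this nbd sequence is the core of nbd_dd_data_verify: fill a scratch file with 1 MiB of random data, copy it onto each exported NBD device with O_DIRECT so the page cache cannot mask a broken backend, then compare the devices back against the source byte by byte. A minimal sketch of that pattern, using the device names and sizes from the trace (the scratch path here is illustrative; the test uses a file under its repo directory):

    # write identical random data to every NBD device, then verify it
    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 256 * 4 KiB = 1 MiB
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # bypass the page cache
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                              # non-zero exit on any mismatch
    done
    rm "$tmp_file"

cmp exits non-zero on the first differing byte, so a corrupted block fails the test immediately.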
00:05:20.622 [2024-06-10 09:43:14.146975] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.524 09:43:15 -- event/event.sh@38 -- # waitforlisten 57889 /var/tmp/spdk-nbd.sock 00:05:22.524 09:43:15 -- common/autotest_common.sh@819 -- # '[' -z 57889 ']' 00:05:22.524 09:43:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.524 09:43:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:22.524 09:43:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.524 09:43:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:22.524 09:43:15 -- common/autotest_common.sh@10 -- # set +x 00:05:22.524 09:43:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:22.524 09:43:16 -- common/autotest_common.sh@852 -- # return 0 00:05:22.524 09:43:16 -- event/event.sh@39 -- # killprocess 57889 00:05:22.524 09:43:16 -- common/autotest_common.sh@926 -- # '[' -z 57889 ']' 00:05:22.524 09:43:16 -- common/autotest_common.sh@930 -- # kill -0 57889 00:05:22.524 09:43:16 -- common/autotest_common.sh@931 -- # uname 00:05:22.524 09:43:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:22.524 09:43:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57889 00:05:22.524 killing process with pid 57889 00:05:22.524 09:43:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:22.524 09:43:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:22.524 09:43:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57889' 00:05:22.524 09:43:16 -- common/autotest_common.sh@945 -- # kill 57889 00:05:22.524 09:43:16 -- common/autotest_common.sh@950 -- # wait 57889 00:05:23.459 spdk_app_start is called in Round 0. 00:05:23.459 Shutdown signal received, stop current app iteration 00:05:23.459 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:05:23.459 spdk_app_start is called in Round 1. 00:05:23.459 Shutdown signal received, stop current app iteration 00:05:23.459 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:05:23.459 spdk_app_start is called in Round 2. 00:05:23.459 Shutdown signal received, stop current app iteration 00:05:23.459 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:05:23.459 spdk_app_start is called in Round 3. 
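killprocess, traced here and after every test below, is deliberately defensive: it checks that a pid was passed at all, probes the process with kill -0, looks up the command name with ps, refuses to signal anything running as sudo, and finally reaps the child so its exit status propagates. A condensed reading of the logic shown in the trace (not the verbatim autotest_common.sh source; the non-Linux branch and the sudo handling are simplified here):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1          # still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1          # the real helper special-cases sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap; works because the tgt is our child
    }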
00:05:23.459 Shutdown signal received, stop current app iteration 00:05:23.459 ************************************ 00:05:23.459 END TEST app_repeat 00:05:23.459 ************************************ 00:05:23.459 09:43:17 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:23.459 09:43:17 -- event/event.sh@42 -- # return 0 00:05:23.459 00:05:23.459 real 0m19.855s 00:05:23.459 user 0m42.984s 00:05:23.459 sys 0m2.558s 00:05:23.459 09:43:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.459 09:43:17 -- common/autotest_common.sh@10 -- # set +x 00:05:23.459 09:43:17 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:23.459 09:43:17 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:23.459 09:43:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.459 09:43:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.459 09:43:17 -- common/autotest_common.sh@10 -- # set +x 00:05:23.459 ************************************ 00:05:23.459 START TEST cpu_locks 00:05:23.459 ************************************ 00:05:23.459 09:43:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:23.459 * Looking for test storage... 00:05:23.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:23.459 09:43:17 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:23.459 09:43:17 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:23.459 09:43:17 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:23.459 09:43:17 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:23.459 09:43:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.459 09:43:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.459 09:43:17 -- common/autotest_common.sh@10 -- # set +x 00:05:23.459 ************************************ 00:05:23.459 START TEST default_locks 00:05:23.459 ************************************ 00:05:23.459 09:43:17 -- common/autotest_common.sh@1104 -- # default_locks 00:05:23.459 09:43:17 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58333 00:05:23.459 09:43:17 -- event/cpu_locks.sh@47 -- # waitforlisten 58333 00:05:23.459 09:43:17 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.459 09:43:17 -- common/autotest_common.sh@819 -- # '[' -z 58333 ']' 00:05:23.459 09:43:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.460 09:43:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:23.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.460 09:43:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.460 09:43:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:23.460 09:43:17 -- common/autotest_common.sh@10 -- # set +x 00:05:23.718 [2024-06-10 09:43:17.322766] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:23.718 [2024-06-10 09:43:17.322915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58333 ] 00:05:23.977 [2024-06-10 09:43:17.491148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.977 [2024-06-10 09:43:17.652562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.977 [2024-06-10 09:43:17.652818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.418 09:43:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:25.418 09:43:18 -- common/autotest_common.sh@852 -- # return 0 00:05:25.418 09:43:18 -- event/cpu_locks.sh@49 -- # locks_exist 58333 00:05:25.418 09:43:18 -- event/cpu_locks.sh@22 -- # lslocks -p 58333 00:05:25.418 09:43:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.677 09:43:19 -- event/cpu_locks.sh@50 -- # killprocess 58333 00:05:25.677 09:43:19 -- common/autotest_common.sh@926 -- # '[' -z 58333 ']' 00:05:25.677 09:43:19 -- common/autotest_common.sh@930 -- # kill -0 58333 00:05:25.677 09:43:19 -- common/autotest_common.sh@931 -- # uname 00:05:25.677 09:43:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:25.677 09:43:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58333 00:05:25.677 killing process with pid 58333 00:05:25.677 09:43:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:25.677 09:43:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:25.677 09:43:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58333' 00:05:25.677 09:43:19 -- common/autotest_common.sh@945 -- # kill 58333 00:05:25.677 09:43:19 -- common/autotest_common.sh@950 -- # wait 58333 00:05:27.581 09:43:21 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58333 00:05:27.581 09:43:21 -- common/autotest_common.sh@640 -- # local es=0 00:05:27.581 09:43:21 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58333 00:05:27.581 09:43:21 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:27.581 09:43:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:27.581 09:43:21 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:27.581 09:43:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:27.581 09:43:21 -- common/autotest_common.sh@643 -- # waitforlisten 58333 00:05:27.582 09:43:21 -- common/autotest_common.sh@819 -- # '[' -z 58333 ']' 00:05:27.582 09:43:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.582 09:43:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:27.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.582 ERROR: process (pid: 58333) is no longer running 00:05:27.582 ************************************ 00:05:27.582 END TEST default_locks 00:05:27.582 ************************************ 00:05:27.582 09:43:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:27.582 09:43:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:27.582 09:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.582 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58333) - No such process 00:05:27.582 09:43:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:27.582 09:43:21 -- common/autotest_common.sh@852 -- # return 1 00:05:27.582 09:43:21 -- common/autotest_common.sh@643 -- # es=1 00:05:27.582 09:43:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:27.582 09:43:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:27.582 09:43:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:27.582 09:43:21 -- event/cpu_locks.sh@54 -- # no_locks 00:05:27.582 09:43:21 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:27.582 09:43:21 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:27.582 09:43:21 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:27.582 00:05:27.582 real 0m3.984s 00:05:27.582 user 0m4.301s 00:05:27.582 sys 0m0.624s 00:05:27.582 09:43:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.582 09:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.582 09:43:21 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:27.582 09:43:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.582 09:43:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.582 09:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.582 ************************************ 00:05:27.582 START TEST default_locks_via_rpc 00:05:27.582 ************************************ 00:05:27.582 09:43:21 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:27.582 09:43:21 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58410 00:05:27.582 09:43:21 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.582 09:43:21 -- event/cpu_locks.sh@63 -- # waitforlisten 58410 00:05:27.582 09:43:21 -- common/autotest_common.sh@819 -- # '[' -z 58410 ']' 00:05:27.582 09:43:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.582 09:43:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:27.582 09:43:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.582 09:43:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:27.582 09:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.582 [2024-06-10 09:43:21.333404] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:27.582 [2024-06-10 09:43:21.333735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58410 ] 00:05:27.840 [2024-06-10 09:43:21.490102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.099 [2024-06-10 09:43:21.648153] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.099 [2024-06-10 09:43:21.648725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.666 09:43:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:28.666 09:43:22 -- common/autotest_common.sh@852 -- # return 0 00:05:28.666 09:43:22 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:28.666 09:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.666 09:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.666 09:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.666 09:43:22 -- event/cpu_locks.sh@67 -- # no_locks 00:05:28.666 09:43:22 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:28.666 09:43:22 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:28.666 09:43:22 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:28.666 09:43:22 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:28.666 09:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.666 09:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.666 09:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.666 09:43:22 -- event/cpu_locks.sh@71 -- # locks_exist 58410 00:05:28.666 09:43:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.666 09:43:22 -- event/cpu_locks.sh@22 -- # lslocks -p 58410 00:05:29.234 09:43:22 -- event/cpu_locks.sh@73 -- # killprocess 58410 00:05:29.234 09:43:22 -- common/autotest_common.sh@926 -- # '[' -z 58410 ']' 00:05:29.234 09:43:22 -- common/autotest_common.sh@930 -- # kill -0 58410 00:05:29.234 09:43:22 -- common/autotest_common.sh@931 -- # uname 00:05:29.234 09:43:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:29.234 09:43:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58410 00:05:29.234 killing process with pid 58410 00:05:29.234 09:43:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:29.234 09:43:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:29.234 09:43:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58410' 00:05:29.234 09:43:22 -- common/autotest_common.sh@945 -- # kill 58410 00:05:29.234 09:43:22 -- common/autotest_common.sh@950 -- # wait 58410 00:05:31.139 ************************************ 00:05:31.139 END TEST default_locks_via_rpc 00:05:31.139 ************************************ 00:05:31.139 00:05:31.139 real 0m3.271s 00:05:31.139 user 0m3.431s 00:05:31.139 sys 0m0.539s 00:05:31.139 09:43:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.139 09:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:31.139 09:43:24 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:31.139 09:43:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.139 09:43:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.139 09:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:31.139 
************************************ 00:05:31.139 START TEST non_locking_app_on_locked_coremask 00:05:31.139 ************************************ 00:05:31.139 09:43:24 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:31.139 09:43:24 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58473 00:05:31.139 09:43:24 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.139 09:43:24 -- event/cpu_locks.sh@81 -- # waitforlisten 58473 /var/tmp/spdk.sock 00:05:31.139 09:43:24 -- common/autotest_common.sh@819 -- # '[' -z 58473 ']' 00:05:31.139 09:43:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.139 09:43:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.139 09:43:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.139 09:43:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.139 09:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:31.139 [2024-06-10 09:43:24.697856] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:31.139 [2024-06-10 09:43:24.698304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58473 ] 00:05:31.139 [2024-06-10 09:43:24.860619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.398 [2024-06-10 09:43:25.015719] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.398 [2024-06-10 09:43:25.015938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.773 09:43:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.773 09:43:26 -- common/autotest_common.sh@852 -- # return 0 00:05:32.773 09:43:26 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58502 00:05:32.773 09:43:26 -- event/cpu_locks.sh@85 -- # waitforlisten 58502 /var/tmp/spdk2.sock 00:05:32.773 09:43:26 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:32.773 09:43:26 -- common/autotest_common.sh@819 -- # '[' -z 58502 ']' 00:05:32.773 09:43:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.773 09:43:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:32.773 09:43:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.773 09:43:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:32.773 09:43:26 -- common/autotest_common.sh@10 -- # set +x 00:05:32.773 [2024-06-10 09:43:26.370894] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:32.773 [2024-06-10 09:43:26.371825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58502 ] 00:05:33.031 [2024-06-10 09:43:26.549401] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
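Worth noting how these suites decide whether a target really holds its core locks: spdk_tgt takes a POSIX lock on /var/tmp/spdk_cpu_lock_NNN for every core it claims, so the locks_exist helper just asks lslocks which locks the pid holds and greps for the prefix, exactly the 'lslocks -p <pid> | grep -q spdk_cpu_lock' pattern that recurs throughout this section. A minimal sketch:

    locks_exist() {
        local pid=$1
        # spdk_tgt flocks /var/tmp/spdk_cpu_lock_<core> for each claimed core
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

Starting the target with --disable-cpumask-locks, as the second process in this test does, skips taking those files, which is what lets two targets share core 0 here.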
00:05:33.031 [2024-06-10 09:43:26.549468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.289 [2024-06-10 09:43:26.886992] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.289 [2024-06-10 09:43:26.887260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.189 09:43:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.189 09:43:28 -- common/autotest_common.sh@852 -- # return 0 00:05:35.189 09:43:28 -- event/cpu_locks.sh@87 -- # locks_exist 58473 00:05:35.189 09:43:28 -- event/cpu_locks.sh@22 -- # lslocks -p 58473 00:05:35.189 09:43:28 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.755 09:43:29 -- event/cpu_locks.sh@89 -- # killprocess 58473 00:05:35.755 09:43:29 -- common/autotest_common.sh@926 -- # '[' -z 58473 ']' 00:05:35.755 09:43:29 -- common/autotest_common.sh@930 -- # kill -0 58473 00:05:35.755 09:43:29 -- common/autotest_common.sh@931 -- # uname 00:05:35.755 09:43:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:35.755 09:43:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58473 00:05:35.755 09:43:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:35.755 09:43:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:35.755 killing process with pid 58473 00:05:35.755 09:43:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58473' 00:05:35.755 09:43:29 -- common/autotest_common.sh@945 -- # kill 58473 00:05:35.755 09:43:29 -- common/autotest_common.sh@950 -- # wait 58473 00:05:39.994 09:43:32 -- event/cpu_locks.sh@90 -- # killprocess 58502 00:05:39.994 09:43:32 -- common/autotest_common.sh@926 -- # '[' -z 58502 ']' 00:05:39.994 09:43:32 -- common/autotest_common.sh@930 -- # kill -0 58502 00:05:39.994 09:43:32 -- common/autotest_common.sh@931 -- # uname 00:05:39.994 09:43:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:39.994 09:43:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58502 00:05:39.994 killing process with pid 58502 00:05:39.994 09:43:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:39.994 09:43:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:39.994 09:43:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58502' 00:05:39.994 09:43:32 -- common/autotest_common.sh@945 -- # kill 58502 00:05:39.994 09:43:32 -- common/autotest_common.sh@950 -- # wait 58502 00:05:41.373 ************************************ 00:05:41.373 END TEST non_locking_app_on_locked_coremask 00:05:41.373 ************************************ 00:05:41.373 00:05:41.373 real 0m10.161s 00:05:41.373 user 0m11.071s 00:05:41.373 sys 0m1.155s 00:05:41.373 09:43:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.373 09:43:34 -- common/autotest_common.sh@10 -- # set +x 00:05:41.373 09:43:34 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:41.373 09:43:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.373 09:43:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.373 09:43:34 -- common/autotest_common.sh@10 -- # set +x 00:05:41.373 ************************************ 00:05:41.373 START TEST locking_app_on_unlocked_coremask 00:05:41.373 ************************************ 00:05:41.373 09:43:34 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:41.373 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.373 09:43:34 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58630 00:05:41.373 09:43:34 -- event/cpu_locks.sh@99 -- # waitforlisten 58630 /var/tmp/spdk.sock 00:05:41.373 09:43:34 -- common/autotest_common.sh@819 -- # '[' -z 58630 ']' 00:05:41.373 09:43:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.373 09:43:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.373 09:43:34 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:41.373 09:43:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.373 09:43:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.373 09:43:34 -- common/autotest_common.sh@10 -- # set +x 00:05:41.373 [2024-06-10 09:43:34.903056] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:41.373 [2024-06-10 09:43:34.903966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58630 ] 00:05:41.373 [2024-06-10 09:43:35.075551] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:41.373 [2024-06-10 09:43:35.075604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.632 [2024-06-10 09:43:35.233985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.632 [2024-06-10 09:43:35.234303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.010 09:43:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.010 09:43:36 -- common/autotest_common.sh@852 -- # return 0 00:05:43.010 09:43:36 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.010 09:43:36 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58654 00:05:43.010 09:43:36 -- event/cpu_locks.sh@103 -- # waitforlisten 58654 /var/tmp/spdk2.sock 00:05:43.010 09:43:36 -- common/autotest_common.sh@819 -- # '[' -z 58654 ']' 00:05:43.010 09:43:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.010 09:43:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.010 09:43:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.010 09:43:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.010 09:43:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.010 [2024-06-10 09:43:36.551045] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:43.010 [2024-06-10 09:43:36.551474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58654 ] 00:05:43.010 [2024-06-10 09:43:36.713954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.269 [2024-06-10 09:43:37.025927] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.269 [2024-06-10 09:43:37.026205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.175 09:43:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:45.175 09:43:38 -- common/autotest_common.sh@852 -- # return 0 00:05:45.175 09:43:38 -- event/cpu_locks.sh@105 -- # locks_exist 58654 00:05:45.175 09:43:38 -- event/cpu_locks.sh@22 -- # lslocks -p 58654 00:05:45.175 09:43:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.113 09:43:39 -- event/cpu_locks.sh@107 -- # killprocess 58630 00:05:46.113 09:43:39 -- common/autotest_common.sh@926 -- # '[' -z 58630 ']' 00:05:46.113 09:43:39 -- common/autotest_common.sh@930 -- # kill -0 58630 00:05:46.113 09:43:39 -- common/autotest_common.sh@931 -- # uname 00:05:46.113 09:43:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:46.113 09:43:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58630 00:05:46.113 09:43:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:46.113 09:43:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:46.113 killing process with pid 58630 00:05:46.113 09:43:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58630' 00:05:46.113 09:43:39 -- common/autotest_common.sh@945 -- # kill 58630 00:05:46.113 09:43:39 -- common/autotest_common.sh@950 -- # wait 58630 00:05:50.303 09:43:43 -- event/cpu_locks.sh@108 -- # killprocess 58654 00:05:50.303 09:43:43 -- common/autotest_common.sh@926 -- # '[' -z 58654 ']' 00:05:50.303 09:43:43 -- common/autotest_common.sh@930 -- # kill -0 58654 00:05:50.303 09:43:43 -- common/autotest_common.sh@931 -- # uname 00:05:50.303 09:43:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:50.303 09:43:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58654 00:05:50.303 killing process with pid 58654 00:05:50.303 09:43:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:50.303 09:43:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:50.303 09:43:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58654' 00:05:50.303 09:43:43 -- common/autotest_common.sh@945 -- # kill 58654 00:05:50.303 09:43:43 -- common/autotest_common.sh@950 -- # wait 58654 00:05:51.240 00:05:51.240 real 0m10.121s 00:05:51.240 user 0m11.039s 00:05:51.240 sys 0m1.230s 00:05:51.240 09:43:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.240 09:43:44 -- common/autotest_common.sh@10 -- # set +x 00:05:51.240 ************************************ 00:05:51.240 END TEST locking_app_on_unlocked_coremask 00:05:51.240 ************************************ 00:05:51.240 09:43:44 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:51.240 09:43:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.240 09:43:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.240 09:43:44 -- common/autotest_common.sh@10 -- # set 
+x 00:05:51.240 ************************************ 00:05:51.240 START TEST locking_app_on_locked_coremask 00:05:51.240 ************************************ 00:05:51.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.240 09:43:44 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:51.240 09:43:44 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58784 00:05:51.240 09:43:44 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.240 09:43:44 -- event/cpu_locks.sh@116 -- # waitforlisten 58784 /var/tmp/spdk.sock 00:05:51.240 09:43:44 -- common/autotest_common.sh@819 -- # '[' -z 58784 ']' 00:05:51.240 09:43:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.240 09:43:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:51.240 09:43:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.240 09:43:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:51.240 09:43:44 -- common/autotest_common.sh@10 -- # set +x 00:05:51.497 [2024-06-10 09:43:45.089056] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:51.497 [2024-06-10 09:43:45.089458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58784 ] 00:05:51.497 [2024-06-10 09:43:45.256663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.755 [2024-06-10 09:43:45.405730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.755 [2024-06-10 09:43:45.406228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.131 09:43:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:53.131 09:43:46 -- common/autotest_common.sh@852 -- # return 0 00:05:53.131 09:43:46 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58813 00:05:53.131 09:43:46 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.132 09:43:46 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58813 /var/tmp/spdk2.sock 00:05:53.132 09:43:46 -- common/autotest_common.sh@640 -- # local es=0 00:05:53.132 09:43:46 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58813 /var/tmp/spdk2.sock 00:05:53.132 09:43:46 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:53.132 09:43:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.132 09:43:46 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:53.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.132 09:43:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.132 09:43:46 -- common/autotest_common.sh@643 -- # waitforlisten 58813 /var/tmp/spdk2.sock 00:05:53.132 09:43:46 -- common/autotest_common.sh@819 -- # '[' -z 58813 ']' 00:05:53.132 09:43:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.132 09:43:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:53.132 09:43:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
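The second target in this test is expected to die, and the NOT wrapper is how the suite asserts that: it runs the wrapped command and succeeds only when that command fails. A condensed sketch of the behavior visible in the trace (the real helper also validates the argument with type -t and special-cases exit statuses above 128):

    NOT() {
        local es=0
        "$@" || es=$?        # run the wrapped command, remember its status
        (( es != 0 ))        # NOT succeeds exactly when the command failed
    }

So 'NOT waitforlisten 58813 /var/tmp/spdk2.sock' passes here precisely because pid 58813 could not take the core 0 lock and never came up.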
00:05:53.132 09:43:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:53.132 09:43:46 -- common/autotest_common.sh@10 -- # set +x 00:05:53.132 [2024-06-10 09:43:46.747443] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:53.132 [2024-06-10 09:43:46.747858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58813 ] 00:05:53.390 [2024-06-10 09:43:46.921537] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58784 has claimed it. 00:05:53.390 [2024-06-10 09:43:46.921628] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.649 ERROR: process (pid: 58813) is no longer running 00:05:53.649 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58813) - No such process 00:05:53.649 09:43:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:53.649 09:43:47 -- common/autotest_common.sh@852 -- # return 1 00:05:53.649 09:43:47 -- common/autotest_common.sh@643 -- # es=1 00:05:53.649 09:43:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:53.649 09:43:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:53.649 09:43:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:53.649 09:43:47 -- event/cpu_locks.sh@122 -- # locks_exist 58784 00:05:53.649 09:43:47 -- event/cpu_locks.sh@22 -- # lslocks -p 58784 00:05:53.649 09:43:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.216 09:43:47 -- event/cpu_locks.sh@124 -- # killprocess 58784 00:05:54.216 09:43:47 -- common/autotest_common.sh@926 -- # '[' -z 58784 ']' 00:05:54.216 09:43:47 -- common/autotest_common.sh@930 -- # kill -0 58784 00:05:54.216 09:43:47 -- common/autotest_common.sh@931 -- # uname 00:05:54.216 09:43:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.216 09:43:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58784 00:05:54.216 killing process with pid 58784 00:05:54.216 09:43:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:54.216 09:43:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:54.216 09:43:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58784' 00:05:54.216 09:43:47 -- common/autotest_common.sh@945 -- # kill 58784 00:05:54.216 09:43:47 -- common/autotest_common.sh@950 -- # wait 58784 00:05:56.121 ************************************ 00:05:56.121 END TEST locking_app_on_locked_coremask 00:05:56.121 ************************************ 00:05:56.121 00:05:56.121 real 0m4.605s 00:05:56.121 user 0m5.131s 00:05:56.121 sys 0m0.732s 00:05:56.121 09:43:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.121 09:43:49 -- common/autotest_common.sh@10 -- # set +x 00:05:56.121 09:43:49 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:56.121 09:43:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.121 09:43:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.121 09:43:49 -- common/autotest_common.sh@10 -- # set +x 00:05:56.121 ************************************ 00:05:56.121 START TEST locking_overlapped_coremask 00:05:56.121 ************************************ 00:05:56.121 09:43:49 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:56.121 09:43:49 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58880 00:05:56.121 09:43:49 -- event/cpu_locks.sh@133 -- # waitforlisten 58880 /var/tmp/spdk.sock 00:05:56.121 09:43:49 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:56.121 09:43:49 -- common/autotest_common.sh@819 -- # '[' -z 58880 ']' 00:05:56.121 09:43:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.121 09:43:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.121 09:43:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.121 09:43:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.121 09:43:49 -- common/autotest_common.sh@10 -- # set +x 00:05:56.121 [2024-06-10 09:43:49.743027] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:56.121 [2024-06-10 09:43:49.743225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58880 ] 00:05:56.380 [2024-06-10 09:43:49.907864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.380 [2024-06-10 09:43:50.062531] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.380 [2024-06-10 09:43:50.062910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.380 [2024-06-10 09:43:50.063207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.380 [2024-06-10 09:43:50.063213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.756 09:43:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.756 09:43:51 -- common/autotest_common.sh@852 -- # return 0 00:05:57.756 09:43:51 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:57.756 09:43:51 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58900 00:05:57.756 09:43:51 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58900 /var/tmp/spdk2.sock 00:05:57.756 09:43:51 -- common/autotest_common.sh@640 -- # local es=0 00:05:57.757 09:43:51 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58900 /var/tmp/spdk2.sock 00:05:57.757 09:43:51 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:57.757 09:43:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:57.757 09:43:51 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:57.757 09:43:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:57.757 09:43:51 -- common/autotest_common.sh@643 -- # waitforlisten 58900 /var/tmp/spdk2.sock 00:05:57.757 09:43:51 -- common/autotest_common.sh@819 -- # '[' -z 58900 ']' 00:05:57.757 09:43:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.757 09:43:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:57.757 09:43:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:57.757 09:43:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:57.757 09:43:51 -- common/autotest_common.sh@10 -- # set +x 00:05:57.757 [2024-06-10 09:43:51.429173] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:57.757 [2024-06-10 09:43:51.429351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58900 ] 00:05:58.016 [2024-06-10 09:43:51.605064] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58880 has claimed it. 00:05:58.016 [2024-06-10 09:43:51.605171] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.583 ERROR: process (pid: 58900) is no longer running 00:05:58.583 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58900) - No such process 00:05:58.583 09:43:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:58.583 09:43:52 -- common/autotest_common.sh@852 -- # return 1 00:05:58.583 09:43:52 -- common/autotest_common.sh@643 -- # es=1 00:05:58.583 09:43:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:58.583 09:43:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:58.583 09:43:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:58.583 09:43:52 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:58.583 09:43:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.583 09:43:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.583 09:43:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.583 09:43:52 -- event/cpu_locks.sh@141 -- # killprocess 58880 00:05:58.583 09:43:52 -- common/autotest_common.sh@926 -- # '[' -z 58880 ']' 00:05:58.583 09:43:52 -- common/autotest_common.sh@930 -- # kill -0 58880 00:05:58.583 09:43:52 -- common/autotest_common.sh@931 -- # uname 00:05:58.583 09:43:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:58.583 09:43:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58880 00:05:58.583 09:43:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:58.583 09:43:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:58.583 09:43:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58880' 00:05:58.583 killing process with pid 58880 00:05:58.583 09:43:52 -- common/autotest_common.sh@945 -- # kill 58880 00:05:58.583 09:43:52 -- common/autotest_common.sh@950 -- # wait 58880 00:06:00.487 00:06:00.487 real 0m4.291s 00:06:00.487 user 0m11.752s 00:06:00.487 sys 0m0.541s 00:06:00.487 09:43:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.487 09:43:53 -- common/autotest_common.sh@10 -- # set +x 00:06:00.487 ************************************ 00:06:00.487 END TEST locking_overlapped_coremask 00:06:00.487 ************************************ 00:06:00.487 09:43:53 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:00.487 09:43:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:00.487 09:43:53 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.487 09:43:53 -- common/autotest_common.sh@10 -- # set +x 00:06:00.487 ************************************ 00:06:00.487 START TEST locking_overlapped_coremask_via_rpc 00:06:00.487 ************************************ 00:06:00.487 09:43:53 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:00.487 09:43:53 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58964 00:06:00.487 09:43:53 -- event/cpu_locks.sh@149 -- # waitforlisten 58964 /var/tmp/spdk.sock 00:06:00.487 09:43:53 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:00.487 09:43:53 -- common/autotest_common.sh@819 -- # '[' -z 58964 ']' 00:06:00.487 09:43:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.487 09:43:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.487 09:43:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.487 09:43:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.487 09:43:53 -- common/autotest_common.sh@10 -- # set +x 00:06:00.487 [2024-06-10 09:43:54.058680] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:00.487 [2024-06-10 09:43:54.058886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58964 ] 00:06:00.487 [2024-06-10 09:43:54.214849] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.487 [2024-06-10 09:43:54.214894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.747 [2024-06-10 09:43:54.369542] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.747 [2024-06-10 09:43:54.369909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.747 [2024-06-10 09:43:54.370200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.747 [2024-06-10 09:43:54.370201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.125 09:43:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:02.125 09:43:55 -- common/autotest_common.sh@852 -- # return 0 00:06:02.125 09:43:55 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58994 00:06:02.125 09:43:55 -- event/cpu_locks.sh@153 -- # waitforlisten 58994 /var/tmp/spdk2.sock 00:06:02.125 09:43:55 -- common/autotest_common.sh@819 -- # '[' -z 58994 ']' 00:06:02.125 09:43:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.125 09:43:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:02.125 09:43:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
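Both overlapped-coremask tests, the one just finished and the via_rpc variant starting here, pair -m 0x7 (cores 0-2) against -m 0x1c (cores 2-4), so the 'Cannot create lock on core 2' failures are pure mask arithmetic: exactly one core is contended. A one-liner to see the overlap:

    printf 'contended mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2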
00:06:02.125 09:43:55 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:02.125 09:43:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.125 09:43:55 -- common/autotest_common.sh@10 -- # set +x 00:06:02.125 [2024-06-10 09:43:55.806069] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:02.125 [2024-06-10 09:43:55.806282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58994 ] 00:06:02.385 [2024-06-10 09:43:55.982590] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:02.385 [2024-06-10 09:43:55.982686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.644 [2024-06-10 09:43:56.322126] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.644 [2024-06-10 09:43:56.322580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.644 [2024-06-10 09:43:56.326301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.644 [2024-06-10 09:43:56.326314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:04.550 09:43:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.550 09:43:58 -- common/autotest_common.sh@852 -- # return 0 00:06:04.550 09:43:58 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:04.550 09:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.550 09:43:58 -- common/autotest_common.sh@10 -- # set +x 00:06:04.550 09:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.550 09:43:58 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.550 09:43:58 -- common/autotest_common.sh@640 -- # local es=0 00:06:04.550 09:43:58 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.550 09:43:58 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:04.550 09:43:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:04.550 09:43:58 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:04.550 09:43:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:04.550 09:43:58 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.550 09:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.550 09:43:58 -- common/autotest_common.sh@10 -- # set +x 00:06:04.550 [2024-06-10 09:43:58.131389] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58964 has claimed it. 00:06:04.550 request: 00:06:04.550 { 00:06:04.550 "method": "framework_enable_cpumask_locks", 00:06:04.550 "req_id": 1 00:06:04.550 } 00:06:04.550 Got JSON-RPC error response 00:06:04.550 response: 00:06:04.550 { 00:06:04.550 "code": -32603, 00:06:04.550 "message": "Failed to claim CPU core: 2" 00:06:04.550 } 00:06:04.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
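The failing call above went over the RPC plane rather than at boot: both targets started with --disable-cpumask-locks, the first then claimed cores 0-2 via framework_enable_cpumask_locks, and the second's attempt produced the -32603 response shown. Reproducing the same pair of calls by hand would look like this (socket paths taken from the trace):

    # first target (cores 0-2) claims its locks:
    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks
    # second target (cores 2-4) then collides on core 2 and gets the error above:
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks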
00:06:04.550 09:43:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:04.550 09:43:58 -- common/autotest_common.sh@643 -- # es=1 00:06:04.550 09:43:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:04.550 09:43:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:04.550 09:43:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:04.550 09:43:58 -- event/cpu_locks.sh@158 -- # waitforlisten 58964 /var/tmp/spdk.sock 00:06:04.550 09:43:58 -- common/autotest_common.sh@819 -- # '[' -z 58964 ']' 00:06:04.550 09:43:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.550 09:43:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.550 09:43:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.550 09:43:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.550 09:43:58 -- common/autotest_common.sh@10 -- # set +x 00:06:04.809 09:43:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.809 09:43:58 -- common/autotest_common.sh@852 -- # return 0 00:06:04.809 09:43:58 -- event/cpu_locks.sh@159 -- # waitforlisten 58994 /var/tmp/spdk2.sock 00:06:04.809 09:43:58 -- common/autotest_common.sh@819 -- # '[' -z 58994 ']' 00:06:04.809 09:43:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.809 09:43:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.809 09:43:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.809 09:43:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.810 09:43:58 -- common/autotest_common.sh@10 -- # set +x 00:06:05.068 09:43:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:05.068 09:43:58 -- common/autotest_common.sh@852 -- # return 0 00:06:05.068 09:43:58 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:05.068 09:43:58 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.068 09:43:58 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.068 09:43:58 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.068 00:06:05.068 real 0m4.639s 00:06:05.068 user 0m1.811s 00:06:05.068 sys 0m0.248s 00:06:05.068 ************************************ 00:06:05.068 END TEST locking_overlapped_coremask_via_rpc 00:06:05.068 ************************************ 00:06:05.068 09:43:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.068 09:43:58 -- common/autotest_common.sh@10 -- # set +x 00:06:05.068 09:43:58 -- event/cpu_locks.sh@174 -- # cleanup 00:06:05.068 09:43:58 -- event/cpu_locks.sh@15 -- # [[ -z 58964 ]] 00:06:05.068 09:43:58 -- event/cpu_locks.sh@15 -- # killprocess 58964 00:06:05.068 09:43:58 -- common/autotest_common.sh@926 -- # '[' -z 58964 ']' 00:06:05.068 09:43:58 -- common/autotest_common.sh@930 -- # kill -0 58964 00:06:05.068 09:43:58 -- common/autotest_common.sh@931 -- # uname 00:06:05.068 09:43:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:05.068 09:43:58 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 58964 00:06:05.068 killing process with pid 58964 00:06:05.068 09:43:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:05.068 09:43:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:05.069 09:43:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58964' 00:06:05.069 09:43:58 -- common/autotest_common.sh@945 -- # kill 58964 00:06:05.069 09:43:58 -- common/autotest_common.sh@950 -- # wait 58964 00:06:06.973 09:44:00 -- event/cpu_locks.sh@16 -- # [[ -z 58994 ]] 00:06:06.973 09:44:00 -- event/cpu_locks.sh@16 -- # killprocess 58994 00:06:06.973 09:44:00 -- common/autotest_common.sh@926 -- # '[' -z 58994 ']' 00:06:06.973 09:44:00 -- common/autotest_common.sh@930 -- # kill -0 58994 00:06:06.973 09:44:00 -- common/autotest_common.sh@931 -- # uname 00:06:06.973 09:44:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:06.973 09:44:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58994 00:06:06.973 killing process with pid 58994 00:06:06.973 09:44:00 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:06.973 09:44:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:06.973 09:44:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58994' 00:06:06.973 09:44:00 -- common/autotest_common.sh@945 -- # kill 58994 00:06:06.973 09:44:00 -- common/autotest_common.sh@950 -- # wait 58994 00:06:08.876 09:44:02 -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.876 09:44:02 -- event/cpu_locks.sh@1 -- # cleanup 00:06:08.876 09:44:02 -- event/cpu_locks.sh@15 -- # [[ -z 58964 ]] 00:06:08.876 09:44:02 -- event/cpu_locks.sh@15 -- # killprocess 58964 00:06:08.876 09:44:02 -- common/autotest_common.sh@926 -- # '[' -z 58964 ']' 00:06:08.876 09:44:02 -- common/autotest_common.sh@930 -- # kill -0 58964 00:06:08.876 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58964) - No such process 00:06:08.876 Process with pid 58964 is not found 00:06:08.876 Process with pid 58994 is not found 00:06:08.876 09:44:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58964 is not found' 00:06:08.876 09:44:02 -- event/cpu_locks.sh@16 -- # [[ -z 58994 ]] 00:06:08.876 09:44:02 -- event/cpu_locks.sh@16 -- # killprocess 58994 00:06:08.876 09:44:02 -- common/autotest_common.sh@926 -- # '[' -z 58994 ']' 00:06:08.876 09:44:02 -- common/autotest_common.sh@930 -- # kill -0 58994 00:06:08.876 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58994) - No such process 00:06:08.876 09:44:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58994 is not found' 00:06:08.876 09:44:02 -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.876 00:06:08.876 real 0m45.329s 00:06:08.876 user 1m20.035s 00:06:08.876 sys 0m5.989s 00:06:08.876 09:44:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.876 ************************************ 00:06:08.876 09:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:08.876 END TEST cpu_locks 00:06:08.876 ************************************ 00:06:08.876 ************************************ 00:06:08.876 END TEST event 00:06:08.876 ************************************ 00:06:08.876 00:06:08.876 real 1m15.597s 00:06:08.876 user 2m18.914s 00:06:08.876 sys 0m9.437s 00:06:08.876 09:44:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.876 09:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:08.876 09:44:02 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:08.876 09:44:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.876 09:44:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.876 09:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:08.876 ************************************ 00:06:08.876 START TEST thread 00:06:08.876 ************************************ 00:06:08.876 09:44:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:08.876 * Looking for test storage... 00:06:08.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:08.876 09:44:02 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.876 09:44:02 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:08.876 09:44:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.876 09:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:08.876 ************************************ 00:06:08.876 START TEST thread_poller_perf 00:06:08.876 ************************************ 00:06:08.876 09:44:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.135 [2024-06-10 09:44:02.660838] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:09.135 [2024-06-10 09:44:02.661570] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59161 ] 00:06:09.135 [2024-06-10 09:44:02.818848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.394 [2024-06-10 09:44:03.040087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.394 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:10.774 ====================================== 00:06:10.774 busy:2208952301 (cyc) 00:06:10.774 total_run_count: 332000 00:06:10.774 tsc_hz: 2200000000 (cyc) 00:06:10.774 ====================================== 00:06:10.774 poller_cost: 6653 (cyc), 3024 (nsec) 00:06:10.774 00:06:10.774 real 0m1.717s 00:06:10.774 user 0m1.519s 00:06:10.774 sys 0m0.088s 00:06:10.774 ************************************ 00:06:10.774 END TEST thread_poller_perf 00:06:10.774 ************************************ 00:06:10.774 09:44:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.774 09:44:04 -- common/autotest_common.sh@10 -- # set +x 00:06:10.774 09:44:04 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.774 09:44:04 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:10.774 09:44:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.774 09:44:04 -- common/autotest_common.sh@10 -- # set +x 00:06:10.774 ************************************ 00:06:10.774 START TEST thread_poller_perf 00:06:10.774 ************************************ 00:06:10.774 09:44:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.774 [2024-06-10 09:44:04.447011] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:10.774 [2024-06-10 09:44:04.447184] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59203 ] 00:06:11.034 [2024-06-10 09:44:04.635517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.034 [2024-06-10 09:44:04.788548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.034 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:12.413 ====================================== 00:06:12.413 busy:2204832970 (cyc) 00:06:12.413 total_run_count: 4496000 00:06:12.413 tsc_hz: 2200000000 (cyc) 00:06:12.413 ====================================== 00:06:12.413 poller_cost: 490 (cyc), 222 (nsec) 00:06:12.413 00:06:12.413 real 0m1.727s 00:06:12.413 user 0m1.503s 00:06:12.413 sys 0m0.114s 00:06:12.413 09:44:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.413 ************************************ 00:06:12.413 END TEST thread_poller_perf 00:06:12.413 ************************************ 00:06:12.413 09:44:06 -- common/autotest_common.sh@10 -- # set +x 00:06:12.413 09:44:06 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:12.413 00:06:12.413 real 0m3.635s 00:06:12.413 user 0m3.083s 00:06:12.413 sys 0m0.319s 00:06:12.413 09:44:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.413 ************************************ 00:06:12.413 END TEST thread 00:06:12.413 ************************************ 00:06:12.413 09:44:06 -- common/autotest_common.sh@10 -- # set +x 00:06:12.672 09:44:06 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:12.672 09:44:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.672 09:44:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.672 09:44:06 -- common/autotest_common.sh@10 -- # set +x 00:06:12.672 ************************************ 00:06:12.672 START TEST accel 00:06:12.672 ************************************ 00:06:12.672 09:44:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:12.672 * Looking for test storage... 00:06:12.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:12.672 09:44:06 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:12.672 09:44:06 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:12.672 09:44:06 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.672 09:44:06 -- accel/accel.sh@59 -- # spdk_tgt_pid=59283 00:06:12.672 09:44:06 -- accel/accel.sh@60 -- # waitforlisten 59283 00:06:12.672 09:44:06 -- common/autotest_common.sh@819 -- # '[' -z 59283 ']' 00:06:12.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.672 09:44:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.672 09:44:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:12.672 09:44:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
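Note on the two poller_perf runs above: the poller_cost figures follow directly from the other counters — cycles per poll is the busy cycle count divided by total_run_count, and the nanosecond figure divides that by tsc_hz. A quick recomputation for the first run, as a sketch (the variable names are ours; the values are copied from the report):

    # Recompute poller_cost from the counters poller_perf printed above.
    busy_cyc=2208952301     # "busy:" (cycles spent polling)
    run_count=332000        # "total_run_count:"
    tsc_hz=2200000000       # "tsc_hz:" (TSC ticks per second)

    cyc=$((busy_cyc / run_count))            # -> 6653 (cyc)
    nsec=$((cyc * 1000000000 / tsc_hz))      # -> 3024 (nsec)
    echo "poller_cost: $cyc (cyc), $nsec (nsec)"

The same arithmetic on the second run (2204832970 cycles over 4496000 runs) reproduces the reported 490 cyc / 222 nsec, so a 0-microsecond-period poller call is roughly 13x cheaper than the 1-microsecond case.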
00:06:12.672 09:44:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:12.672 09:44:06 -- accel/accel.sh@58 -- # build_accel_config 00:06:12.672 09:44:06 -- common/autotest_common.sh@10 -- # set +x 00:06:12.672 09:44:06 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:12.672 09:44:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.672 09:44:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.672 09:44:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.672 09:44:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.672 09:44:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.672 09:44:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.672 09:44:06 -- accel/accel.sh@42 -- # jq -r . 00:06:12.672 [2024-06-10 09:44:06.420383] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:12.672 [2024-06-10 09:44:06.420584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59283 ] 00:06:12.931 [2024-06-10 09:44:06.582714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.190 [2024-06-10 09:44:06.765928] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.190 [2024-06-10 09:44:06.766172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.569 09:44:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.569 09:44:08 -- common/autotest_common.sh@852 -- # return 0 00:06:14.569 09:44:08 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:14.569 09:44:08 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:14.569 09:44:08 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:14.569 09:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:14.569 09:44:08 -- common/autotest_common.sh@10 -- # set +x 00:06:14.569 09:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 
09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # IFS== 00:06:14.569 09:44:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.569 09:44:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.569 09:44:08 -- accel/accel.sh@67 -- # killprocess 59283 00:06:14.569 09:44:08 -- common/autotest_common.sh@926 -- # '[' -z 59283 ']' 00:06:14.569 09:44:08 -- common/autotest_common.sh@930 -- # kill -0 59283 00:06:14.569 09:44:08 -- common/autotest_common.sh@931 -- # uname 00:06:14.569 09:44:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:14.569 09:44:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59283 00:06:14.569 09:44:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:14.569 killing process with pid 59283 00:06:14.569 09:44:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:14.569 09:44:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59283' 00:06:14.569 09:44:08 -- common/autotest_common.sh@945 -- # kill 59283 00:06:14.569 09:44:08 -- common/autotest_common.sh@950 -- # wait 59283 00:06:16.474 09:44:09 -- accel/accel.sh@68 -- # trap - ERR 00:06:16.474 09:44:09 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:16.474 09:44:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:16.474 09:44:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.474 09:44:09 -- common/autotest_common.sh@10 -- # set +x 00:06:16.474 09:44:09 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:16.474 09:44:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:16.474 09:44:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.474 09:44:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.474 09:44:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.474 09:44:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.474 09:44:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.474 09:44:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.474 09:44:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.474 09:44:09 -- accel/accel.sh@42 -- # jq -r . 
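The long run of IFS== / read -r opc module lines above is a single loop unrolled by xtrace: accel.sh flattens the JSON map returned by accel_get_opc_assignments into opcode=module words with jq, then splits each word on '='. A standalone sketch of the same pattern, with a hard-coded stand-in for the rpc.py output:

    declare -A expected_opcs
    # Stand-in for: rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    exp_opcs=(copy=software fill=software crc32c=software)

    for opc_opt in "${exp_opcs[@]}"; do
            IFS='=' read -r opc module <<< "$opc_opt"   # split on '=' without touching the global IFS
            expected_opcs["$opc"]=$module
    done
    echo "crc32c -> ${expected_opcs[crc32c]}"           # -> software

Every opcode resolves to the software module here, consistent with build_accel_config having emitted no hardware module overrides into the JSON handed to spdk_tgt via -c /dev/fd/63.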
00:06:16.474 09:44:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.474 09:44:09 -- common/autotest_common.sh@10 -- # set +x 00:06:16.474 09:44:09 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:16.474 09:44:09 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:16.474 09:44:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.474 09:44:09 -- common/autotest_common.sh@10 -- # set +x 00:06:16.474 ************************************ 00:06:16.474 START TEST accel_missing_filename 00:06:16.474 ************************************ 00:06:16.474 09:44:09 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:16.474 09:44:09 -- common/autotest_common.sh@640 -- # local es=0 00:06:16.474 09:44:09 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:16.474 09:44:09 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:16.474 09:44:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.474 09:44:09 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:16.474 09:44:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.474 09:44:09 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:16.474 09:44:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:16.474 09:44:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.474 09:44:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.474 09:44:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.474 09:44:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.474 09:44:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.474 09:44:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.474 09:44:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.474 09:44:09 -- accel/accel.sh@42 -- # jq -r . 00:06:16.474 [2024-06-10 09:44:09.994536] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:16.474 [2024-06-10 09:44:09.994715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59360 ] 00:06:16.474 [2024-06-10 09:44:10.163938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.733 [2024-06-10 09:44:10.318082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.733 [2024-06-10 09:44:10.466366] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.301 [2024-06-10 09:44:10.849008] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:17.560 A filename is required. 
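The es=234 / es=106 / es=1 hops that follow are the NOT() wrapper normalizing accel_perf's failure: exit codes above 128 conventionally encode death by a signal, so 128 is folded out (234 - 128 = 106) before any remaining non-zero status is collapsed to 1 and inverted into a test pass. A minimal sketch of that logic, reconstructed from the xtrace lines rather than from autotest_common.sh itself:

    NOT() {
            local es=0
            "$@" || es=$?
            ((es > 128)) && es=$((es - 128))   # fold out the "killed by signal" offset
            ((es > 0)) && es=1                 # collapse any remaining failure to 1
            ((!es == 0))                       # succeed only if the wrapped command failed
    }

    NOT false && echo "failure detected, as the test expects"

The [[ -n '' ]] step visible in the traces suggests the real helper also supports an optional expected-error filter, which this sketch omits.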
00:06:17.560 09:44:11 -- common/autotest_common.sh@643 -- # es=234 00:06:17.560 09:44:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:17.560 09:44:11 -- common/autotest_common.sh@652 -- # es=106 00:06:17.560 09:44:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:17.560 09:44:11 -- common/autotest_common.sh@660 -- # es=1 00:06:17.560 09:44:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:17.560 00:06:17.560 real 0m1.208s 00:06:17.560 user 0m0.997s 00:06:17.560 sys 0m0.157s 00:06:17.560 09:44:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.560 09:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:17.560 ************************************ 00:06:17.560 END TEST accel_missing_filename 00:06:17.560 ************************************ 00:06:17.560 09:44:11 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.560 09:44:11 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:17.560 09:44:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.560 09:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:17.560 ************************************ 00:06:17.560 START TEST accel_compress_verify 00:06:17.560 ************************************ 00:06:17.560 09:44:11 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.560 09:44:11 -- common/autotest_common.sh@640 -- # local es=0 00:06:17.560 09:44:11 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.560 09:44:11 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:17.560 09:44:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.560 09:44:11 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:17.560 09:44:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.560 09:44:11 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.560 09:44:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.560 09:44:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.560 09:44:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.560 09:44:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.560 09:44:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.560 09:44:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.560 09:44:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.560 09:44:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.560 09:44:11 -- accel/accel.sh@42 -- # jq -r . 00:06:17.560 [2024-06-10 09:44:11.251758] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:17.560 [2024-06-10 09:44:11.251908] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59392 ] 00:06:17.820 [2024-06-10 09:44:11.414149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.820 [2024-06-10 09:44:11.563267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.079 [2024-06-10 09:44:11.721385] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.338 [2024-06-10 09:44:12.101329] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:18.907 00:06:18.907 Compression does not support the verify option, aborting. 00:06:18.907 09:44:12 -- common/autotest_common.sh@643 -- # es=161 00:06:18.907 09:44:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:18.907 09:44:12 -- common/autotest_common.sh@652 -- # es=33 00:06:18.907 09:44:12 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:18.907 09:44:12 -- common/autotest_common.sh@660 -- # es=1 00:06:18.907 09:44:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:18.907 00:06:18.907 real 0m1.198s 00:06:18.907 user 0m1.003s 00:06:18.907 sys 0m0.139s 00:06:18.907 09:44:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.907 ************************************ 00:06:18.907 END TEST accel_compress_verify 00:06:18.907 ************************************ 00:06:18.907 09:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:18.907 09:44:12 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:18.907 09:44:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:18.907 09:44:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.907 09:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:18.907 ************************************ 00:06:18.907 START TEST accel_wrong_workload 00:06:18.907 ************************************ 00:06:18.907 09:44:12 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:18.907 09:44:12 -- common/autotest_common.sh@640 -- # local es=0 00:06:18.907 09:44:12 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:18.907 09:44:12 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:18.907 09:44:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:18.907 09:44:12 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:18.907 09:44:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:18.907 09:44:12 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:18.907 09:44:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:18.907 09:44:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.907 09:44:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.907 09:44:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.907 09:44:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.907 09:44:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.907 09:44:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.907 09:44:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.907 09:44:12 -- accel/accel.sh@42 -- # jq -r . 
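Every accel_perf run in this suite receives its configuration as '-c /dev/fd/62': accel.sh builds the JSON with build_accel_config/jq and hands it over through a process substitution, so no temporary file touches disk. The trick in isolation (the JSON body is a placeholder, not the config accel.sh actually assembles):

    cfg='{"subsystems": []}'   # placeholder config
    # <(...) appears to the child as /dev/fd/NN -- the /dev/fd/62 seen in the traces.
    jq . <(printf '%s\n' "$cfg")

jq merely stands in for accel_perf here; any tool that accepts a config-file path works the same way.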
00:06:18.907 Unsupported workload type: foobar 00:06:18.907 [2024-06-10 09:44:12.496086] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:18.907 accel_perf options: 00:06:18.907 [-h help message] 00:06:18.907 [-q queue depth per core] 00:06:18.907 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:18.907 [-T number of threads per core 00:06:18.907 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:18.907 [-t time in seconds] 00:06:18.907 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:18.907 [ dif_verify, , dif_generate, dif_generate_copy 00:06:18.907 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:18.907 [-l for compress/decompress workloads, name of uncompressed input file 00:06:18.907 [-S for crc32c workload, use this seed value (default 0) 00:06:18.907 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:18.907 [-f for fill workload, use this BYTE value (default 255) 00:06:18.907 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:18.907 [-y verify result if this switch is on] 00:06:18.907 [-a tasks to allocate per core (default: same value as -q)] 00:06:18.907 Can be used to spread operations across a wider range of memory. 00:06:18.907 09:44:12 -- common/autotest_common.sh@643 -- # es=1 00:06:18.907 09:44:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:18.907 09:44:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:18.907 09:44:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:18.907 00:06:18.907 real 0m0.076s 00:06:18.907 user 0m0.079s 00:06:18.907 sys 0m0.046s 00:06:18.907 09:44:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.907 ************************************ 00:06:18.907 END TEST accel_wrong_workload 00:06:18.907 ************************************ 00:06:18.907 09:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:18.907 09:44:12 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:18.907 09:44:12 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:18.907 09:44:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.907 09:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:18.907 ************************************ 00:06:18.907 START TEST accel_negative_buffers 00:06:18.907 ************************************ 00:06:18.907 09:44:12 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:18.907 09:44:12 -- common/autotest_common.sh@640 -- # local es=0 00:06:18.907 09:44:12 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:18.907 09:44:12 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:18.907 09:44:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:18.907 09:44:12 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:18.907 09:44:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:18.907 09:44:12 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:18.907 09:44:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:18.907 09:44:12 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:18.907 09:44:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.907 09:44:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.907 09:44:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.907 09:44:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.907 09:44:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.907 09:44:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.907 09:44:12 -- accel/accel.sh@42 -- # jq -r . 00:06:18.907 -x option must be non-negative. 00:06:18.907 [2024-06-10 09:44:12.620167] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:18.907 accel_perf options: 00:06:18.907 [-h help message] 00:06:18.907 [-q queue depth per core] 00:06:18.908 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:18.908 [-T number of threads per core 00:06:18.908 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:18.908 [-t time in seconds] 00:06:18.908 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:18.908 [ dif_verify, , dif_generate, dif_generate_copy 00:06:18.908 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:18.908 [-l for compress/decompress workloads, name of uncompressed input file 00:06:18.908 [-S for crc32c workload, use this seed value (default 0) 00:06:18.908 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:18.908 [-f for fill workload, use this BYTE value (default 255) 00:06:18.908 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:18.908 [-y verify result if this switch is on] 00:06:18.908 [-a tasks to allocate per core (default: same value as -q)] 00:06:18.908 Can be used to spread operations across a wider range of memory. 
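Each banner pair in this log ('START TEST x' ... 'END TEST x' between asterisk rows) comes from the run_test wrapper, which also accounts for the recurring '[ N -le 1 ]' and xtrace_disable trace lines. A condensed sketch of the wrapper's visible behavior; the real helper in autotest_common.sh additionally manages timing and xtrace state:

    run_test() {
            local test_name=$1; shift
            # the real wrapper special-cases a bare command here ([ $# -le 1 ])
            echo "************************************"
            echo "START TEST $test_name"
            "$@"; local rc=$?
            echo "END TEST $test_name"
            echo "************************************"
            return $rc
    }

    run_test demo_check true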
00:06:18.908 ************************************ 00:06:18.908 END TEST accel_negative_buffers 00:06:18.908 ************************************ 00:06:18.908 09:44:12 -- common/autotest_common.sh@643 -- # es=1 00:06:18.908 09:44:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:18.908 09:44:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:18.908 09:44:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:18.908 00:06:18.908 real 0m0.078s 00:06:18.908 user 0m0.091s 00:06:18.908 sys 0m0.032s 00:06:18.908 09:44:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.908 09:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:19.168 09:44:12 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:19.168 09:44:12 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:19.168 09:44:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.168 09:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:19.168 ************************************ 00:06:19.168 START TEST accel_crc32c 00:06:19.168 ************************************ 00:06:19.168 09:44:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:19.168 09:44:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.168 09:44:12 -- accel/accel.sh@17 -- # local accel_module 00:06:19.168 09:44:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:19.168 09:44:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:19.168 09:44:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.168 09:44:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.168 09:44:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.168 09:44:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.168 09:44:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.168 09:44:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.168 09:44:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.168 09:44:12 -- accel/accel.sh@42 -- # jq -r . 00:06:19.168 [2024-06-10 09:44:12.753101] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:19.168 [2024-06-10 09:44:12.753319] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59464 ] 00:06:19.168 [2024-06-10 09:44:12.922283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.431 [2024-06-10 09:44:13.074448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.341 09:44:14 -- accel/accel.sh@18 -- # out=' 00:06:21.341 SPDK Configuration: 00:06:21.341 Core mask: 0x1 00:06:21.341 00:06:21.341 Accel Perf Configuration: 00:06:21.341 Workload Type: crc32c 00:06:21.341 CRC-32C seed: 32 00:06:21.341 Transfer size: 4096 bytes 00:06:21.341 Vector count 1 00:06:21.341 Module: software 00:06:21.341 Queue depth: 32 00:06:21.341 Allocate depth: 32 00:06:21.341 # threads/core: 1 00:06:21.341 Run time: 1 seconds 00:06:21.341 Verify: Yes 00:06:21.341 00:06:21.341 Running for 1 seconds... 
00:06:21.341 00:06:21.341 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.341 ------------------------------------------------------------------------------------ 00:06:21.341 0,0 467040/s 1824 MiB/s 0 0 00:06:21.341 ==================================================================================== 00:06:21.341 Total 467040/s 1824 MiB/s 0 0' 00:06:21.341 09:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:21.341 09:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:21.341 09:44:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:21.341 09:44:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:21.341 09:44:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.341 09:44:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.341 09:44:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.341 09:44:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.341 09:44:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.341 09:44:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.341 09:44:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.341 09:44:14 -- accel/accel.sh@42 -- # jq -r . 00:06:21.341 [2024-06-10 09:44:14.951801] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:21.341 [2024-06-10 09:44:14.951975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59490 ] 00:06:21.600 [2024-06-10 09:44:15.120236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.600 [2024-06-10 09:44:15.265883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val=0x1 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val=crc32c 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val=32 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val=software 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val=32 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val=32 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val=1 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val=Yes 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:21.859 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:21.859 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:21.859 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:23.764 09:44:17 -- accel/accel.sh@21 -- # val= 00:06:23.764 09:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.764 09:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:23.764 09:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:23.764 09:44:17 -- accel/accel.sh@21 -- # val= 00:06:23.764 09:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.764 09:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:23.764 09:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:23.764 09:44:17 -- accel/accel.sh@21 -- # val= 00:06:23.764 09:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.764 09:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:23.764 09:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:23.764 09:44:17 -- accel/accel.sh@21 -- # val= 00:06:23.764 09:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.764 09:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:23.764 09:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:23.764 09:44:17 -- accel/accel.sh@21 -- # val= 00:06:23.765 09:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.765 09:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:23.765 09:44:17 -- 
accel/accel.sh@20 -- # read -r var val 00:06:23.765 09:44:17 -- accel/accel.sh@21 -- # val= 00:06:23.765 09:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.765 09:44:17 -- accel/accel.sh@20 -- # IFS=: 00:06:23.765 09:44:17 -- accel/accel.sh@20 -- # read -r var val 00:06:23.765 09:44:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.765 09:44:17 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:23.765 09:44:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.765 00:06:23.765 real 0m4.397s 00:06:23.765 user 0m3.919s 00:06:23.765 sys 0m0.273s 00:06:23.765 09:44:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.765 ************************************ 00:06:23.765 09:44:17 -- common/autotest_common.sh@10 -- # set +x 00:06:23.765 END TEST accel_crc32c 00:06:23.765 ************************************ 00:06:23.765 09:44:17 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:23.765 09:44:17 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:23.765 09:44:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.765 09:44:17 -- common/autotest_common.sh@10 -- # set +x 00:06:23.765 ************************************ 00:06:23.765 START TEST accel_crc32c_C2 00:06:23.765 ************************************ 00:06:23.765 09:44:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:23.765 09:44:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.765 09:44:17 -- accel/accel.sh@17 -- # local accel_module 00:06:23.765 09:44:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:23.765 09:44:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:23.765 09:44:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.765 09:44:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.765 09:44:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.765 09:44:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.765 09:44:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.765 09:44:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.765 09:44:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.765 09:44:17 -- accel/accel.sh@42 -- # jq -r . 00:06:23.765 [2024-06-10 09:44:17.206606] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:23.765 [2024-06-10 09:44:17.206770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59537 ] 00:06:23.765 [2024-06-10 09:44:17.375040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.765 [2024-06-10 09:44:17.517941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.672 09:44:19 -- accel/accel.sh@18 -- # out=' 00:06:25.672 SPDK Configuration: 00:06:25.672 Core mask: 0x1 00:06:25.672 00:06:25.672 Accel Perf Configuration: 00:06:25.672 Workload Type: crc32c 00:06:25.672 CRC-32C seed: 0 00:06:25.672 Transfer size: 4096 bytes 00:06:25.672 Vector count 2 00:06:25.672 Module: software 00:06:25.672 Queue depth: 32 00:06:25.672 Allocate depth: 32 00:06:25.672 # threads/core: 1 00:06:25.672 Run time: 1 seconds 00:06:25.672 Verify: Yes 00:06:25.672 00:06:25.672 Running for 1 seconds... 
00:06:25.672 00:06:25.672 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.672 ------------------------------------------------------------------------------------ 00:06:25.672 0,0 357376/s 2792 MiB/s 0 0 00:06:25.672 ==================================================================================== 00:06:25.672 Total 357376/s 1396 MiB/s 0 0' 00:06:25.672 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:25.672 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:25.672 09:44:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:25.672 09:44:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:25.672 09:44:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.672 09:44:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.672 09:44:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.672 09:44:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.672 09:44:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.672 09:44:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.672 09:44:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.672 09:44:19 -- accel/accel.sh@42 -- # jq -r . 00:06:25.672 [2024-06-10 09:44:19.421881] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:25.672 [2024-06-10 09:44:19.422035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59563 ] 00:06:25.931 [2024-06-10 09:44:19.589661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.190 [2024-06-10 09:44:19.736453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val=0x1 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val=crc32c 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val=0 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val=software 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val=32 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val=32 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val=1 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val=Yes 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:26.190 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:26.190 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:26.190 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:28.091 09:44:21 -- accel/accel.sh@21 -- # val= 00:06:28.091 09:44:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.091 09:44:21 -- accel/accel.sh@20 -- # IFS=: 00:06:28.091 09:44:21 -- accel/accel.sh@20 -- # read -r var val 00:06:28.091 09:44:21 -- accel/accel.sh@21 -- # val= 00:06:28.091 09:44:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.091 09:44:21 -- accel/accel.sh@20 -- # IFS=: 00:06:28.091 09:44:21 -- accel/accel.sh@20 -- # read -r var val 00:06:28.091 09:44:21 -- accel/accel.sh@21 -- # val= 00:06:28.091 09:44:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.091 09:44:21 -- accel/accel.sh@20 -- # IFS=: 00:06:28.091 09:44:21 -- accel/accel.sh@20 -- # read -r var val 00:06:28.091 09:44:21 -- accel/accel.sh@21 -- # val= 00:06:28.091 09:44:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.091 09:44:21 -- accel/accel.sh@20 -- # IFS=: 00:06:28.091 09:44:21 -- accel/accel.sh@20 -- # read -r var val 00:06:28.091 09:44:21 -- accel/accel.sh@21 -- # val= 00:06:28.091 09:44:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.091 09:44:21 -- accel/accel.sh@20 -- # IFS=: 00:06:28.091 09:44:21 -- 
accel/accel.sh@20 -- # read -r var val 00:06:28.091 09:44:21 -- accel/accel.sh@21 -- # val= 00:06:28.091 09:44:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.091 09:44:21 -- accel/accel.sh@20 -- # IFS=: 00:06:28.091 09:44:21 -- accel/accel.sh@20 -- # read -r var val 00:06:28.091 09:44:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.091 09:44:21 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:28.091 09:44:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.091 00:06:28.091 real 0m4.444s 00:06:28.091 user 0m3.969s 00:06:28.091 sys 0m0.269s 00:06:28.091 ************************************ 00:06:28.091 END TEST accel_crc32c_C2 00:06:28.091 ************************************ 00:06:28.091 09:44:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.091 09:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:28.091 09:44:21 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:28.091 09:44:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:28.091 09:44:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.091 09:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:28.091 ************************************ 00:06:28.091 START TEST accel_copy 00:06:28.091 ************************************ 00:06:28.091 09:44:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:28.091 09:44:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.091 09:44:21 -- accel/accel.sh@17 -- # local accel_module 00:06:28.091 09:44:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:28.091 09:44:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:28.091 09:44:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.091 09:44:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.091 09:44:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.091 09:44:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.091 09:44:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.091 09:44:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.091 09:44:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.091 09:44:21 -- accel/accel.sh@42 -- # jq -r . 00:06:28.091 [2024-06-10 09:44:21.697129] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:28.091 [2024-06-10 09:44:21.697292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59604 ] 00:06:28.349 [2024-06-10 09:44:21.863450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.349 [2024-06-10 09:44:22.032202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.249 09:44:23 -- accel/accel.sh@18 -- # out=' 00:06:30.249 SPDK Configuration: 00:06:30.249 Core mask: 0x1 00:06:30.249 00:06:30.249 Accel Perf Configuration: 00:06:30.249 Workload Type: copy 00:06:30.249 Transfer size: 4096 bytes 00:06:30.249 Vector count 1 00:06:30.249 Module: software 00:06:30.249 Queue depth: 32 00:06:30.249 Allocate depth: 32 00:06:30.249 # threads/core: 1 00:06:30.249 Run time: 1 seconds 00:06:30.249 Verify: Yes 00:06:30.249 00:06:30.249 Running for 1 seconds... 
00:06:30.249 00:06:30.249 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.249 ------------------------------------------------------------------------------------ 00:06:30.249 0,0 270016/s 1054 MiB/s 0 0 00:06:30.249 ==================================================================================== 00:06:30.249 Total 270016/s 1054 MiB/s 0 0' 00:06:30.249 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:30.249 09:44:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:30.249 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:30.249 09:44:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:30.249 09:44:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.249 09:44:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.249 09:44:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.249 09:44:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.249 09:44:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.250 09:44:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.250 09:44:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.250 09:44:23 -- accel/accel.sh@42 -- # jq -r . 00:06:30.250 [2024-06-10 09:44:23.974814] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:30.250 [2024-06-10 09:44:23.974986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59630 ] 00:06:30.508 [2024-06-10 09:44:24.143739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.765 [2024-06-10 09:44:24.310014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.765 09:44:24 -- accel/accel.sh@21 -- # val= 00:06:30.765 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.765 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.765 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.765 09:44:24 -- accel/accel.sh@21 -- # val= 00:06:30.765 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.765 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.765 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.765 09:44:24 -- accel/accel.sh@21 -- # val=0x1 00:06:30.765 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.765 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.765 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.766 09:44:24 -- accel/accel.sh@21 -- # val= 00:06:30.766 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.766 09:44:24 -- accel/accel.sh@21 -- # val= 00:06:30.766 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.766 09:44:24 -- accel/accel.sh@21 -- # val=copy 00:06:30.766 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.766 09:44:24 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.766 09:44:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.766 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.766 09:44:24 -- 
accel/accel.sh@21 -- # val= 00:06:30.766 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.766 09:44:24 -- accel/accel.sh@21 -- # val=software 00:06:30.766 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.766 09:44:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.766 09:44:24 -- accel/accel.sh@21 -- # val=32 00:06:30.766 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.766 09:44:24 -- accel/accel.sh@21 -- # val=32 00:06:30.766 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.766 09:44:24 -- accel/accel.sh@21 -- # val=1 00:06:30.766 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.766 09:44:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.766 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.766 09:44:24 -- accel/accel.sh@21 -- # val=Yes 00:06:30.766 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.766 09:44:24 -- accel/accel.sh@21 -- # val= 00:06:30.766 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.766 09:44:24 -- accel/accel.sh@21 -- # val= 00:06:30.766 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.766 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:32.714 09:44:26 -- accel/accel.sh@21 -- # val= 00:06:32.714 09:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.714 09:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:32.714 09:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:32.714 09:44:26 -- accel/accel.sh@21 -- # val= 00:06:32.714 09:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.714 09:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:32.714 09:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:32.714 09:44:26 -- accel/accel.sh@21 -- # val= 00:06:32.714 09:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.714 09:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:32.714 09:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:32.714 09:44:26 -- accel/accel.sh@21 -- # val= 00:06:32.714 09:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.714 09:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:32.714 09:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:32.714 09:44:26 -- accel/accel.sh@21 -- # val= 00:06:32.714 09:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.714 09:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:32.714 09:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:32.714 09:44:26 -- accel/accel.sh@21 -- # val= 00:06:32.714 09:44:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.714 09:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:32.714 09:44:26 -- 
accel/accel.sh@20 -- # read -r var val 00:06:32.714 09:44:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.714 09:44:26 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:32.714 09:44:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.714 00:06:32.714 real 0m4.555s 00:06:32.714 user 0m4.067s 00:06:32.714 sys 0m0.281s 00:06:32.714 ************************************ 00:06:32.714 END TEST accel_copy 00:06:32.714 ************************************ 00:06:32.714 09:44:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.714 09:44:26 -- common/autotest_common.sh@10 -- # set +x 00:06:32.714 09:44:26 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.714 09:44:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:32.714 09:44:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.714 09:44:26 -- common/autotest_common.sh@10 -- # set +x 00:06:32.714 ************************************ 00:06:32.714 START TEST accel_fill 00:06:32.714 ************************************ 00:06:32.714 09:44:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.714 09:44:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.714 09:44:26 -- accel/accel.sh@17 -- # local accel_module 00:06:32.714 09:44:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.714 09:44:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.714 09:44:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.714 09:44:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.714 09:44:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.714 09:44:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.714 09:44:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.714 09:44:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.714 09:44:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.714 09:44:26 -- accel/accel.sh@42 -- # jq -r . 00:06:32.714 [2024-06-10 09:44:26.307523] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:32.714 [2024-06-10 09:44:26.307682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59682 ] 00:06:32.714 [2024-06-10 09:44:26.472896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.973 [2024-06-10 09:44:26.637641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.877 09:44:28 -- accel/accel.sh@18 -- # out=' 00:06:34.877 SPDK Configuration: 00:06:34.877 Core mask: 0x1 00:06:34.877 00:06:34.877 Accel Perf Configuration: 00:06:34.877 Workload Type: fill 00:06:34.877 Fill pattern: 0x80 00:06:34.877 Transfer size: 4096 bytes 00:06:34.877 Vector count 1 00:06:34.877 Module: software 00:06:34.877 Queue depth: 64 00:06:34.877 Allocate depth: 64 00:06:34.877 # threads/core: 1 00:06:34.877 Run time: 1 seconds 00:06:34.877 Verify: Yes 00:06:34.877 00:06:34.877 Running for 1 seconds... 
00:06:34.877 00:06:34.877 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.877 ------------------------------------------------------------------------------------ 00:06:34.877 0,0 442816/s 1729 MiB/s 0 0 00:06:34.878 ==================================================================================== 00:06:34.878 Total 442816/s 1729 MiB/s 0 0' 00:06:34.878 09:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:34.878 09:44:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.878 09:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:34.878 09:44:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.878 09:44:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.878 09:44:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.878 09:44:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.878 09:44:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.878 09:44:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.878 09:44:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.878 09:44:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.878 09:44:28 -- accel/accel.sh@42 -- # jq -r . 00:06:34.878 [2024-06-10 09:44:28.543886] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:34.878 [2024-06-10 09:44:28.544039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59710 ] 00:06:35.137 [2024-06-10 09:44:28.713690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.137 [2024-06-10 09:44:28.860502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val= 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val= 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val=0x1 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val= 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val= 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val=fill 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val=0x80 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 
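The @12 trace just above pins down the exact command behind these fill numbers. Outside the harness the same run reduces to the sketch below; the binary path and flags are copied verbatim from the trace, and dropping the -c /dev/fd/62 JSON config (only needed when configuring a module other than software) is an assumption:

    # Hedged standalone reproduction of the traced fill case: -f 128 is the
    # 0x80 fill pattern, -q 64 / -a 64 the queue and allocate depths, and -y
    # enables verification, matching the SPDK Configuration block above.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w fill -f 128 -q 64 -a 64 -y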
00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val= 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val=software 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val=64 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val=64 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val=1 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val=Yes 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val= 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:35.396 09:44:29 -- accel/accel.sh@21 -- # val= 00:06:35.396 09:44:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # IFS=: 00:06:35.396 09:44:29 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 09:44:30 -- accel/accel.sh@21 -- # val= 00:06:37.303 09:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.303 09:44:30 -- accel/accel.sh@20 -- # IFS=: 00:06:37.303 09:44:30 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 09:44:30 -- accel/accel.sh@21 -- # val= 00:06:37.303 09:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.303 09:44:30 -- accel/accel.sh@20 -- # IFS=: 00:06:37.303 09:44:30 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 09:44:30 -- accel/accel.sh@21 -- # val= 00:06:37.303 09:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.303 09:44:30 -- accel/accel.sh@20 -- # IFS=: 00:06:37.303 09:44:30 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 09:44:30 -- accel/accel.sh@21 -- # val= 00:06:37.303 09:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.303 09:44:30 -- accel/accel.sh@20 -- # IFS=: 00:06:37.303 09:44:30 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 09:44:30 -- accel/accel.sh@21 -- # val= 00:06:37.303 09:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.303 09:44:30 -- accel/accel.sh@20 -- # IFS=: 
00:06:37.303 09:44:30 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 09:44:30 -- accel/accel.sh@21 -- # val= 00:06:37.303 09:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.303 09:44:30 -- accel/accel.sh@20 -- # IFS=: 00:06:37.303 09:44:30 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 09:44:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.303 09:44:30 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:37.303 09:44:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.303 00:06:37.303 real 0m4.442s 00:06:37.303 user 0m3.946s 00:06:37.303 sys 0m0.289s 00:06:37.303 09:44:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.303 ************************************ 00:06:37.303 END TEST accel_fill 00:06:37.303 ************************************ 00:06:37.303 09:44:30 -- common/autotest_common.sh@10 -- # set +x 00:06:37.303 09:44:30 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:37.303 09:44:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:37.303 09:44:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.303 09:44:30 -- common/autotest_common.sh@10 -- # set +x 00:06:37.303 ************************************ 00:06:37.303 START TEST accel_copy_crc32c 00:06:37.303 ************************************ 00:06:37.303 09:44:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:37.303 09:44:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.303 09:44:30 -- accel/accel.sh@17 -- # local accel_module 00:06:37.303 09:44:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:37.303 09:44:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:37.303 09:44:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.303 09:44:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.303 09:44:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.303 09:44:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.303 09:44:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.303 09:44:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.303 09:44:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.303 09:44:30 -- accel/accel.sh@42 -- # jq -r . 00:06:37.303 [2024-06-10 09:44:30.805626] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:37.303 [2024-06-10 09:44:30.806270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59751 ] 00:06:37.303 [2024-06-10 09:44:30.976130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.562 [2024-06-10 09:44:31.123156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.500 09:44:32 -- accel/accel.sh@18 -- # out=' 00:06:39.500 SPDK Configuration: 00:06:39.500 Core mask: 0x1 00:06:39.500 00:06:39.501 Accel Perf Configuration: 00:06:39.501 Workload Type: copy_crc32c 00:06:39.501 CRC-32C seed: 0 00:06:39.501 Vector size: 4096 bytes 00:06:39.501 Transfer size: 4096 bytes 00:06:39.501 Vector count 1 00:06:39.501 Module: software 00:06:39.501 Queue depth: 32 00:06:39.501 Allocate depth: 32 00:06:39.501 # threads/core: 1 00:06:39.501 Run time: 1 seconds 00:06:39.501 Verify: Yes 00:06:39.501 00:06:39.501 Running for 1 seconds... 
00:06:39.501 00:06:39.501 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.501 ------------------------------------------------------------------------------------ 00:06:39.501 0,0 229952/s 898 MiB/s 0 0 00:06:39.501 ==================================================================================== 00:06:39.501 Total 229952/s 898 MiB/s 0 0' 00:06:39.501 09:44:32 -- accel/accel.sh@20 -- # IFS=: 00:06:39.501 09:44:32 -- accel/accel.sh@20 -- # read -r var val 00:06:39.501 09:44:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:39.501 09:44:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:39.501 09:44:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.501 09:44:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.501 09:44:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.501 09:44:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.501 09:44:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.501 09:44:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.501 09:44:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.501 09:44:33 -- accel/accel.sh@42 -- # jq -r . 00:06:39.501 [2024-06-10 09:44:33.052047] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:39.501 [2024-06-10 09:44:33.052234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59777 ] 00:06:39.501 [2024-06-10 09:44:33.219256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.770 [2024-06-10 09:44:33.388347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val=0x1 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val=0 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 
09:44:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val=software 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val=32 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val=32 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val=1 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val=Yes 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:40.029 09:44:33 -- accel/accel.sh@21 -- # val= 00:06:40.029 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:40.029 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:41.936 09:44:35 -- accel/accel.sh@21 -- # val= 00:06:41.936 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.936 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:41.936 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:41.936 09:44:35 -- accel/accel.sh@21 -- # val= 00:06:41.936 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.936 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:41.936 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:41.936 09:44:35 -- accel/accel.sh@21 -- # val= 00:06:41.936 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.936 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:41.936 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:41.936 09:44:35 -- accel/accel.sh@21 -- # val= 00:06:41.936 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.936 09:44:35 -- accel/accel.sh@20 -- # IFS=: 
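Every case "$var" / IFS=: / read -r var val triple repeating through this trace is accel.sh splitting accel_perf's 'Key: value' summary into shell state (the accel_opc=copy_crc32c and accel_module=software assignments above). A minimal sketch of that loop, with illustrative key handling rather than the harness's exact patterns:

    # Split each "Key: value" line of the captured $out on ':' and keep the
    # two keys the trace shows being stored; ${val# } trims the leading space
    # left after the colon. Key patterns here are an assumption.
    while IFS=: read -r var val; do
        case "$var" in
            *'Workload Type') accel_opc=${val# } ;;
            *'Module')        accel_module=${val# } ;;
        esac
    done <<< "$out"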
00:06:41.936 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:41.936 09:44:35 -- accel/accel.sh@21 -- # val= 00:06:41.936 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.936 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:41.936 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:41.936 09:44:35 -- accel/accel.sh@21 -- # val= 00:06:41.936 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.936 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:41.936 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:41.936 09:44:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.936 09:44:35 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:41.936 ************************************ 00:06:41.936 END TEST accel_copy_crc32c 00:06:41.936 ************************************ 00:06:41.936 09:44:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.936 00:06:41.936 real 0m4.514s 00:06:41.936 user 0m4.030s 00:06:41.936 sys 0m0.276s 00:06:41.936 09:44:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.936 09:44:35 -- common/autotest_common.sh@10 -- # set +x 00:06:41.936 09:44:35 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:41.936 09:44:35 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:41.936 09:44:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.936 09:44:35 -- common/autotest_common.sh@10 -- # set +x 00:06:41.936 ************************************ 00:06:41.936 START TEST accel_copy_crc32c_C2 00:06:41.936 ************************************ 00:06:41.936 09:44:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:41.936 09:44:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.936 09:44:35 -- accel/accel.sh@17 -- # local accel_module 00:06:41.936 09:44:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:41.936 09:44:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:41.936 09:44:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.936 09:44:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.936 09:44:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.936 09:44:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.936 09:44:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.936 09:44:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.936 09:44:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.936 09:44:35 -- accel/accel.sh@42 -- # jq -r . 00:06:41.936 [2024-06-10 09:44:35.391079] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:41.936 [2024-06-10 09:44:35.391314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59818 ] 00:06:41.936 [2024-06-10 09:44:35.562749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.195 [2024-06-10 09:44:35.724818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.100 09:44:37 -- accel/accel.sh@18 -- # out=' 00:06:44.101 SPDK Configuration: 00:06:44.101 Core mask: 0x1 00:06:44.101 00:06:44.101 Accel Perf Configuration: 00:06:44.101 Workload Type: copy_crc32c 00:06:44.101 CRC-32C seed: 0 00:06:44.101 Vector size: 4096 bytes 00:06:44.101 Transfer size: 8192 bytes 00:06:44.101 Vector count 2 00:06:44.101 Module: software 00:06:44.101 Queue depth: 32 00:06:44.101 Allocate depth: 32 00:06:44.101 # threads/core: 1 00:06:44.101 Run time: 1 seconds 00:06:44.101 Verify: Yes 00:06:44.101 00:06:44.101 Running for 1 seconds... 00:06:44.101 00:06:44.101 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.101 ------------------------------------------------------------------------------------ 00:06:44.101 0,0 153280/s 1197 MiB/s 0 0 00:06:44.101 ==================================================================================== 00:06:44.101 Total 153280/s 1197 MiB/s 0 0' 00:06:44.101 09:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:44.101 09:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:44.101 09:44:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:44.101 09:44:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:44.101 09:44:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.101 09:44:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.101 09:44:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.101 09:44:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.101 09:44:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.101 09:44:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.101 09:44:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.101 09:44:37 -- accel/accel.sh@42 -- # jq -r . 00:06:44.101 [2024-06-10 09:44:37.744381] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:06:44.101 [2024-06-10 09:44:37.744543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59850 ] 00:06:44.359 [2024-06-10 09:44:37.915824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.359 [2024-06-10 09:44:38.084408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val= 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val= 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val=0x1 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val= 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val= 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val=0 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val= 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val=software 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val=32 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val=32 
00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val=1 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val=Yes 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val= 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.618 09:44:38 -- accel/accel.sh@21 -- # val= 00:06:44.618 09:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # IFS=: 00:06:44.618 09:44:38 -- accel/accel.sh@20 -- # read -r var val 00:06:46.522 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:46.522 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.522 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:46.522 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:46.522 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:46.522 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.522 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:46.522 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:46.522 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:46.522 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.522 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:46.522 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:46.522 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:46.522 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.522 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:46.522 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:46.522 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:46.522 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.522 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:46.522 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:46.522 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:46.522 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.522 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:46.522 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:46.522 09:44:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.523 09:44:39 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:46.523 09:44:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.523 00:06:46.523 real 0m4.654s 00:06:46.523 user 0m4.143s 00:06:46.523 sys 0m0.305s 00:06:46.523 09:44:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.523 09:44:39 -- common/autotest_common.sh@10 -- # set +x 00:06:46.523 ************************************ 00:06:46.523 END TEST accel_copy_crc32c_C2 00:06:46.523 ************************************ 00:06:46.523 09:44:40 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:46.523 09:44:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:06:46.523 09:44:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.523 09:44:40 -- common/autotest_common.sh@10 -- # set +x 00:06:46.523 ************************************ 00:06:46.523 START TEST accel_dualcast 00:06:46.523 ************************************ 00:06:46.523 09:44:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:46.523 09:44:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.523 09:44:40 -- accel/accel.sh@17 -- # local accel_module 00:06:46.523 09:44:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:46.523 09:44:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:46.523 09:44:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.523 09:44:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.523 09:44:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.523 09:44:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.523 09:44:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.523 09:44:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.523 09:44:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.523 09:44:40 -- accel/accel.sh@42 -- # jq -r . 00:06:46.523 [2024-06-10 09:44:40.073376] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:46.523 [2024-06-10 09:44:40.073530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59896 ] 00:06:46.523 [2024-06-10 09:44:40.240960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.782 [2024-06-10 09:44:40.401807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.685 09:44:42 -- accel/accel.sh@18 -- # out=' 00:06:48.685 SPDK Configuration: 00:06:48.685 Core mask: 0x1 00:06:48.685 00:06:48.685 Accel Perf Configuration: 00:06:48.685 Workload Type: dualcast 00:06:48.685 Transfer size: 4096 bytes 00:06:48.685 Vector count 1 00:06:48.685 Module: software 00:06:48.685 Queue depth: 32 00:06:48.685 Allocate depth: 32 00:06:48.685 # threads/core: 1 00:06:48.685 Run time: 1 seconds 00:06:48.685 Verify: Yes 00:06:48.685 00:06:48.685 Running for 1 seconds... 00:06:48.685 00:06:48.685 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.685 ------------------------------------------------------------------------------------ 00:06:48.685 0,0 296032/s 1156 MiB/s 0 0 00:06:48.685 ==================================================================================== 00:06:48.685 Total 296032/s 1156 MiB/s 0 0' 00:06:48.685 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:48.685 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:48.685 09:44:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:48.685 09:44:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:48.685 09:44:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.685 09:44:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.685 09:44:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.685 09:44:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.685 09:44:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.685 09:44:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.685 09:44:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.685 09:44:42 -- accel/accel.sh@42 -- # jq -r . 
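The Bandwidth column in these result tables is plain arithmetic: transfers per second multiplied by the 4096-byte transfer size. Checking the dualcast row above by hand:

    # 296032 transfers/s * 4096 B = 1,212,547,072 B/s, roughly 1156 MiB/s,
    # matching the logged row.
    awk 'BEGIN { printf "%.0f MiB/s\n", 296032 * 4096 / (1024 * 1024) }'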
00:06:48.685 [2024-06-10 09:44:42.352890] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:48.685 [2024-06-10 09:44:42.353073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59922 ] 00:06:48.944 [2024-06-10 09:44:42.524968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.944 [2024-06-10 09:44:42.688167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.202 09:44:42 -- accel/accel.sh@21 -- # val= 00:06:49.202 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.202 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.202 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val= 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val=0x1 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val= 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val= 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val=dualcast 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val= 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val=software 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val=32 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val=32 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val=1 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 
09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val=Yes 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val= 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:49.203 09:44:42 -- accel/accel.sh@21 -- # val= 00:06:49.203 09:44:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # IFS=: 00:06:49.203 09:44:42 -- accel/accel.sh@20 -- # read -r var val 00:06:51.108 09:44:44 -- accel/accel.sh@21 -- # val= 00:06:51.109 09:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.109 09:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:51.109 09:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:51.109 09:44:44 -- accel/accel.sh@21 -- # val= 00:06:51.109 09:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.109 09:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:51.109 09:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:51.109 09:44:44 -- accel/accel.sh@21 -- # val= 00:06:51.109 09:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.109 09:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:51.109 09:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:51.109 09:44:44 -- accel/accel.sh@21 -- # val= 00:06:51.109 09:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.109 09:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:51.109 09:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:51.109 09:44:44 -- accel/accel.sh@21 -- # val= 00:06:51.109 09:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.109 09:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:51.109 09:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:51.109 09:44:44 -- accel/accel.sh@21 -- # val= 00:06:51.109 09:44:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.109 09:44:44 -- accel/accel.sh@20 -- # IFS=: 00:06:51.109 09:44:44 -- accel/accel.sh@20 -- # read -r var val 00:06:51.109 ************************************ 00:06:51.109 END TEST accel_dualcast 00:06:51.109 ************************************ 00:06:51.109 09:44:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.109 09:44:44 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:51.109 09:44:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.109 00:06:51.109 real 0m4.573s 00:06:51.109 user 0m4.091s 00:06:51.109 sys 0m0.269s 00:06:51.109 09:44:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.109 09:44:44 -- common/autotest_common.sh@10 -- # set +x 00:06:51.109 09:44:44 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:51.109 09:44:44 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:51.109 09:44:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.109 09:44:44 -- common/autotest_common.sh@10 -- # set +x 00:06:51.109 ************************************ 00:06:51.109 START TEST accel_compare 00:06:51.109 ************************************ 00:06:51.109 09:44:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:51.109 
09:44:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.109 09:44:44 -- accel/accel.sh@17 -- # local accel_module 00:06:51.109 09:44:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:51.109 09:44:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:51.109 09:44:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.109 09:44:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.109 09:44:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.109 09:44:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.109 09:44:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.109 09:44:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.109 09:44:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.109 09:44:44 -- accel/accel.sh@42 -- # jq -r . 00:06:51.109 [2024-06-10 09:44:44.694971] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:51.109 [2024-06-10 09:44:44.695139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59969 ] 00:06:51.109 [2024-06-10 09:44:44.864147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.368 [2024-06-10 09:44:45.021981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.273 09:44:46 -- accel/accel.sh@18 -- # out=' 00:06:53.273 SPDK Configuration: 00:06:53.273 Core mask: 0x1 00:06:53.273 00:06:53.273 Accel Perf Configuration: 00:06:53.273 Workload Type: compare 00:06:53.273 Transfer size: 4096 bytes 00:06:53.273 Vector count 1 00:06:53.273 Module: software 00:06:53.273 Queue depth: 32 00:06:53.273 Allocate depth: 32 00:06:53.273 # threads/core: 1 00:06:53.273 Run time: 1 seconds 00:06:53.273 Verify: Yes 00:06:53.273 00:06:53.273 Running for 1 seconds... 00:06:53.273 00:06:53.273 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.273 ------------------------------------------------------------------------------------ 00:06:53.273 0,0 400256/s 1563 MiB/s 0 0 00:06:53.273 ==================================================================================== 00:06:53.273 Total 400256/s 1563 MiB/s 0 0' 00:06:53.273 09:44:46 -- accel/accel.sh@20 -- # IFS=: 00:06:53.273 09:44:46 -- accel/accel.sh@20 -- # read -r var val 00:06:53.273 09:44:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:53.273 09:44:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:53.273 09:44:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.273 09:44:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.273 09:44:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.273 09:44:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.273 09:44:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.273 09:44:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.273 09:44:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.273 09:44:46 -- accel/accel.sh@42 -- # jq -r . 00:06:53.273 [2024-06-10 09:44:46.964142] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:53.273 [2024-06-10 09:44:46.964297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59995 ] 00:06:53.532 [2024-06-10 09:44:47.132517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.532 [2024-06-10 09:44:47.294053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.791 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:53.791 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.791 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.791 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.791 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:53.791 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.791 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.791 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.791 09:44:47 -- accel/accel.sh@21 -- # val=0x1 00:06:53.791 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.791 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.791 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.791 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:53.791 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.791 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.791 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.791 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:53.791 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.791 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.791 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.791 09:44:47 -- accel/accel.sh@21 -- # val=compare 00:06:53.791 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.791 09:44:47 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:53.791 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.791 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.791 09:44:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.791 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.792 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:53.792 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.792 09:44:47 -- accel/accel.sh@21 -- # val=software 00:06:53.792 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.792 09:44:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.792 09:44:47 -- accel/accel.sh@21 -- # val=32 00:06:53.792 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.792 09:44:47 -- accel/accel.sh@21 -- # val=32 00:06:53.792 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.792 09:44:47 -- accel/accel.sh@21 -- # val=1 00:06:53.792 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.792 09:44:47 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:53.792 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.792 09:44:47 -- accel/accel.sh@21 -- # val=Yes 00:06:53.792 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.792 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:53.792 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:53.792 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:53.792 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:53.792 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:55.723 09:44:49 -- accel/accel.sh@21 -- # val= 00:06:55.723 09:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.723 09:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.723 09:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.723 09:44:49 -- accel/accel.sh@21 -- # val= 00:06:55.724 09:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.724 09:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.724 09:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.724 09:44:49 -- accel/accel.sh@21 -- # val= 00:06:55.724 09:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.724 09:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.724 09:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.724 09:44:49 -- accel/accel.sh@21 -- # val= 00:06:55.724 09:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.724 09:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.724 09:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.724 09:44:49 -- accel/accel.sh@21 -- # val= 00:06:55.724 09:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.724 09:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.724 09:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.724 09:44:49 -- accel/accel.sh@21 -- # val= 00:06:55.724 09:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.724 09:44:49 -- accel/accel.sh@20 -- # IFS=: 00:06:55.724 09:44:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.724 09:44:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.724 09:44:49 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:55.724 ************************************ 00:06:55.724 END TEST accel_compare 00:06:55.724 ************************************ 00:06:55.724 09:44:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.724 00:06:55.724 real 0m4.562s 00:06:55.724 user 0m4.047s 00:06:55.724 sys 0m0.307s 00:06:55.724 09:44:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.724 09:44:49 -- common/autotest_common.sh@10 -- # set +x 00:06:55.724 09:44:49 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:55.724 09:44:49 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:55.724 09:44:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.724 09:44:49 -- common/autotest_common.sh@10 -- # set +x 00:06:55.724 ************************************ 00:06:55.724 START TEST accel_xor 00:06:55.724 ************************************ 00:06:55.724 09:44:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:55.724 09:44:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.724 09:44:49 -- accel/accel.sh@17 -- # local accel_module 00:06:55.724 
09:44:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:55.724 09:44:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:55.724 09:44:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.724 09:44:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.724 09:44:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.724 09:44:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.724 09:44:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.724 09:44:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.724 09:44:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.724 09:44:49 -- accel/accel.sh@42 -- # jq -r . 00:06:55.724 [2024-06-10 09:44:49.314861] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:55.724 [2024-06-10 09:44:49.315017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60036 ] 00:06:55.724 [2024-06-10 09:44:49.486064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.983 [2024-06-10 09:44:49.672641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.888 09:44:51 -- accel/accel.sh@18 -- # out=' 00:06:57.888 SPDK Configuration: 00:06:57.888 Core mask: 0x1 00:06:57.888 00:06:57.888 Accel Perf Configuration: 00:06:57.888 Workload Type: xor 00:06:57.888 Source buffers: 2 00:06:57.888 Transfer size: 4096 bytes 00:06:57.888 Vector count 1 00:06:57.888 Module: software 00:06:57.888 Queue depth: 32 00:06:57.888 Allocate depth: 32 00:06:57.888 # threads/core: 1 00:06:57.888 Run time: 1 seconds 00:06:57.888 Verify: Yes 00:06:57.888 00:06:57.888 Running for 1 seconds... 00:06:57.888 00:06:57.888 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.888 ------------------------------------------------------------------------------------ 00:06:57.888 0,0 216320/s 845 MiB/s 0 0 00:06:57.888 ==================================================================================== 00:06:57.888 Total 216320/s 845 MiB/s 0 0' 00:06:57.888 09:44:51 -- accel/accel.sh@20 -- # IFS=: 00:06:57.888 09:44:51 -- accel/accel.sh@20 -- # read -r var val 00:06:57.888 09:44:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:57.888 09:44:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:57.888 09:44:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.888 09:44:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.888 09:44:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.888 09:44:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.888 09:44:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.888 09:44:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.888 09:44:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.888 09:44:51 -- accel/accel.sh@42 -- # jq -r . 00:06:58.146 [2024-06-10 09:44:51.678019] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
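
For reference, the xor run above can be reproduced by invoking the example binary directly; the harness additionally passes -c with a JSON config streamed over /dev/fd/62, omitted here. Flag meanings are inferred from the printed configuration block, so treat this as an illustration rather than documented usage:

    # -t 1 -> Run time: 1 seconds; -w xor -> Workload Type: xor;
    # -y -> Verify: Yes (all inferred from the config dump above)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y

    # Sanity-check the reported bandwidth: 216320 transfers/s x 4096 B each
    echo $(( 216320 * 4096 / 1024 / 1024 ))   # prints 845 (MiB/s), as logged
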
00:06:58.146 [2024-06-10 09:44:51.678199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60066 ] 00:06:58.146 [2024-06-10 09:44:51.847203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.405 [2024-06-10 09:44:52.008926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val= 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val= 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val=0x1 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val= 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val= 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val=xor 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val=2 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val= 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val=software 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val=32 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val=32 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val=1 00:06:58.664 09:44:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val=Yes 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val= 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:06:58.664 09:44:52 -- accel/accel.sh@21 -- # val= 00:06:58.664 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:06:58.664 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:00.568 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:00.568 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.568 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:00.568 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:00.568 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:00.568 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.568 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:00.568 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:00.568 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:00.568 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.568 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:00.568 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:00.568 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:00.568 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.568 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:00.568 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:00.568 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:00.568 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.568 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:00.568 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:00.568 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:00.568 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.568 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:00.568 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:00.568 09:44:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.568 09:44:53 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:00.568 09:44:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.568 00:07:00.568 real 0m4.681s 00:07:00.568 user 0m4.169s 00:07:00.568 sys 0m0.305s 00:07:00.568 09:44:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.568 09:44:53 -- common/autotest_common.sh@10 -- # set +x 00:07:00.568 ************************************ 00:07:00.568 END TEST accel_xor 00:07:00.568 ************************************ 00:07:00.568 09:44:53 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:00.568 09:44:53 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:00.568 09:44:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.568 09:44:53 -- common/autotest_common.sh@10 -- # set +x 00:07:00.568 ************************************ 00:07:00.568 START TEST accel_xor 00:07:00.568 ************************************ 00:07:00.568 
09:44:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:00.568 09:44:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.568 09:44:53 -- accel/accel.sh@17 -- # local accel_module 00:07:00.568 09:44:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:00.568 09:44:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:00.568 09:44:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.568 09:44:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.568 09:44:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.568 09:44:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.568 09:44:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.568 09:44:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.568 09:44:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.568 09:44:53 -- accel/accel.sh@42 -- # jq -r . 00:07:00.569 [2024-06-10 09:44:54.036776] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:00.569 [2024-06-10 09:44:54.036959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60114 ] 00:07:00.569 [2024-06-10 09:44:54.204826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.827 [2024-06-10 09:44:54.365246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.731 09:44:56 -- accel/accel.sh@18 -- # out=' 00:07:02.731 SPDK Configuration: 00:07:02.731 Core mask: 0x1 00:07:02.731 00:07:02.731 Accel Perf Configuration: 00:07:02.731 Workload Type: xor 00:07:02.731 Source buffers: 3 00:07:02.731 Transfer size: 4096 bytes 00:07:02.731 Vector count 1 00:07:02.731 Module: software 00:07:02.731 Queue depth: 32 00:07:02.731 Allocate depth: 32 00:07:02.731 # threads/core: 1 00:07:02.731 Run time: 1 seconds 00:07:02.731 Verify: Yes 00:07:02.731 00:07:02.731 Running for 1 seconds... 00:07:02.731 00:07:02.731 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.731 ------------------------------------------------------------------------------------ 00:07:02.731 0,0 197216/s 770 MiB/s 0 0 00:07:02.731 ==================================================================================== 00:07:02.731 Total 197216/s 770 MiB/s 0 0' 00:07:02.731 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:02.731 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:02.731 09:44:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:02.731 09:44:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:02.731 09:44:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.731 09:44:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.731 09:44:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.731 09:44:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.731 09:44:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.731 09:44:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.731 09:44:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.731 09:44:56 -- accel/accel.sh@42 -- # jq -r . 00:07:02.731 [2024-06-10 09:44:56.337387] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
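
The three-source-buffer variant above lands at 197216 transfers/s, down from 216320 with two buffers, which is consistent with each 4 KiB transfer now reading one extra source buffer. The same arithmetic reproduces the table (illustration only):

    echo $(( 197216 * 4096 / 1024 / 1024 ))   # prints 770 (MiB/s), matching the log
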
00:07:02.731 [2024-06-10 09:44:56.337582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60140 ] 00:07:02.989 [2024-06-10 09:44:56.509866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.989 [2024-06-10 09:44:56.677311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.247 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:03.247 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.247 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:03.247 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.247 09:44:56 -- accel/accel.sh@21 -- # val=0x1 00:07:03.247 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.247 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:03.247 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.247 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:03.247 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.247 09:44:56 -- accel/accel.sh@21 -- # val=xor 00:07:03.247 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.247 09:44:56 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.247 09:44:56 -- accel/accel.sh@21 -- # val=3 00:07:03.247 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.247 09:44:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.247 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.247 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:03.247 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.247 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.247 09:44:56 -- accel/accel.sh@21 -- # val=software 00:07:03.247 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.248 09:44:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.248 09:44:56 -- accel/accel.sh@21 -- # val=32 00:07:03.248 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.248 09:44:56 -- accel/accel.sh@21 -- # val=32 00:07:03.248 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.248 09:44:56 -- accel/accel.sh@21 -- # val=1 00:07:03.248 09:44:56 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.248 09:44:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.248 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.248 09:44:56 -- accel/accel.sh@21 -- # val=Yes 00:07:03.248 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.248 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:03.248 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:03.248 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:03.248 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:03.248 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:05.149 09:44:58 -- accel/accel.sh@21 -- # val= 00:07:05.149 09:44:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.149 09:44:58 -- accel/accel.sh@20 -- # IFS=: 00:07:05.149 09:44:58 -- accel/accel.sh@20 -- # read -r var val 00:07:05.149 09:44:58 -- accel/accel.sh@21 -- # val= 00:07:05.149 09:44:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.149 09:44:58 -- accel/accel.sh@20 -- # IFS=: 00:07:05.149 09:44:58 -- accel/accel.sh@20 -- # read -r var val 00:07:05.149 09:44:58 -- accel/accel.sh@21 -- # val= 00:07:05.149 09:44:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.149 09:44:58 -- accel/accel.sh@20 -- # IFS=: 00:07:05.149 09:44:58 -- accel/accel.sh@20 -- # read -r var val 00:07:05.149 09:44:58 -- accel/accel.sh@21 -- # val= 00:07:05.149 09:44:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.149 09:44:58 -- accel/accel.sh@20 -- # IFS=: 00:07:05.149 09:44:58 -- accel/accel.sh@20 -- # read -r var val 00:07:05.149 09:44:58 -- accel/accel.sh@21 -- # val= 00:07:05.149 09:44:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.149 09:44:58 -- accel/accel.sh@20 -- # IFS=: 00:07:05.149 09:44:58 -- accel/accel.sh@20 -- # read -r var val 00:07:05.149 09:44:58 -- accel/accel.sh@21 -- # val= 00:07:05.149 09:44:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.149 09:44:58 -- accel/accel.sh@20 -- # IFS=: 00:07:05.149 09:44:58 -- accel/accel.sh@20 -- # read -r var val 00:07:05.149 09:44:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.149 09:44:58 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:05.149 09:44:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.149 00:07:05.149 real 0m4.628s 00:07:05.149 user 0m4.134s 00:07:05.149 sys 0m0.284s 00:07:05.149 09:44:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.149 09:44:58 -- common/autotest_common.sh@10 -- # set +x 00:07:05.149 ************************************ 00:07:05.149 END TEST accel_xor 00:07:05.149 ************************************ 00:07:05.149 09:44:58 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:05.149 09:44:58 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:05.149 09:44:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.149 09:44:58 -- common/autotest_common.sh@10 -- # set +x 00:07:05.149 ************************************ 00:07:05.149 START TEST accel_dif_verify 00:07:05.149 ************************************ 
00:07:05.149 09:44:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:05.149 09:44:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.149 09:44:58 -- accel/accel.sh@17 -- # local accel_module 00:07:05.149 09:44:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:05.149 09:44:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:05.149 09:44:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.149 09:44:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.149 09:44:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.149 09:44:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.149 09:44:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.149 09:44:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.149 09:44:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.149 09:44:58 -- accel/accel.sh@42 -- # jq -r . 00:07:05.149 [2024-06-10 09:44:58.718594] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:05.149 [2024-06-10 09:44:58.718761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60181 ] 00:07:05.149 [2024-06-10 09:44:58.887189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.408 [2024-06-10 09:44:59.045656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.308 09:45:00 -- accel/accel.sh@18 -- # out=' 00:07:07.308 SPDK Configuration: 00:07:07.308 Core mask: 0x1 00:07:07.308 00:07:07.308 Accel Perf Configuration: 00:07:07.308 Workload Type: dif_verify 00:07:07.308 Vector size: 4096 bytes 00:07:07.308 Transfer size: 4096 bytes 00:07:07.308 Block size: 512 bytes 00:07:07.308 Metadata size: 8 bytes 00:07:07.308 Vector count 1 00:07:07.308 Module: software 00:07:07.308 Queue depth: 32 00:07:07.308 Allocate depth: 32 00:07:07.308 # threads/core: 1 00:07:07.308 Run time: 1 seconds 00:07:07.308 Verify: No 00:07:07.308 00:07:07.308 Running for 1 seconds... 00:07:07.308 00:07:07.308 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.308 ------------------------------------------------------------------------------------ 00:07:07.308 0,0 94368/s 374 MiB/s 0 0 00:07:07.308 ==================================================================================== 00:07:07.308 Total 94368/s 368 MiB/s 0 0' 00:07:07.308 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:07.308 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:07.308 09:45:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:07.308 09:45:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:07.308 09:45:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.308 09:45:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.308 09:45:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.308 09:45:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.308 09:45:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.308 09:45:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.308 09:45:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.308 09:45:00 -- accel/accel.sh@42 -- # jq -r . 00:07:07.308 [2024-06-10 09:45:01.022362] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
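
Background on the dif_verify workload above: DIF is the T10 protection-information format, an 8-byte tuple (guard CRC, application tag, reference tag) carried per 512-byte block, which matches the "Block size: 512 bytes / Metadata size: 8 bytes" lines in the config dump. The tuple layout is standard T10 background, not something this log asserts. Quick arithmetic on the logged figures:

    echo $(( 4096 / 512 ))                   # 8 DIF tuples per 4 KiB transfer
    echo $(( 94368 * 4096 / 1024 / 1024 ))   # prints 368 (MiB/s), matching the Total line
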
00:07:07.308 [2024-06-10 09:45:01.022524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60213 ] 00:07:07.567 [2024-06-10 09:45:01.191990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.826 [2024-06-10 09:45:01.362202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val=0x1 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val=dif_verify 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val=software 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 
-- # val=32 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val=32 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val=1 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val=No 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.826 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:07.826 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:07.826 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:09.728 09:45:03 -- accel/accel.sh@21 -- # val= 00:07:09.728 09:45:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.728 09:45:03 -- accel/accel.sh@20 -- # IFS=: 00:07:09.728 09:45:03 -- accel/accel.sh@20 -- # read -r var val 00:07:09.728 09:45:03 -- accel/accel.sh@21 -- # val= 00:07:09.728 09:45:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.728 09:45:03 -- accel/accel.sh@20 -- # IFS=: 00:07:09.728 09:45:03 -- accel/accel.sh@20 -- # read -r var val 00:07:09.728 09:45:03 -- accel/accel.sh@21 -- # val= 00:07:09.728 09:45:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.728 09:45:03 -- accel/accel.sh@20 -- # IFS=: 00:07:09.728 09:45:03 -- accel/accel.sh@20 -- # read -r var val 00:07:09.728 09:45:03 -- accel/accel.sh@21 -- # val= 00:07:09.728 09:45:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.728 09:45:03 -- accel/accel.sh@20 -- # IFS=: 00:07:09.728 09:45:03 -- accel/accel.sh@20 -- # read -r var val 00:07:09.728 09:45:03 -- accel/accel.sh@21 -- # val= 00:07:09.728 09:45:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.728 09:45:03 -- accel/accel.sh@20 -- # IFS=: 00:07:09.728 09:45:03 -- accel/accel.sh@20 -- # read -r var val 00:07:09.728 09:45:03 -- accel/accel.sh@21 -- # val= 00:07:09.728 09:45:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.728 09:45:03 -- accel/accel.sh@20 -- # IFS=: 00:07:09.728 09:45:03 -- accel/accel.sh@20 -- # read -r var val 00:07:09.728 09:45:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.728 09:45:03 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:09.728 09:45:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.728 00:07:09.728 real 0m4.634s 00:07:09.728 user 0m4.145s 00:07:09.728 sys 0m0.283s 00:07:09.728 09:45:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.728 ************************************ 00:07:09.728 END TEST accel_dif_verify 00:07:09.728 ************************************ 00:07:09.728 
09:45:03 -- common/autotest_common.sh@10 -- # set +x 00:07:09.728 09:45:03 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:09.728 09:45:03 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:09.728 09:45:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.728 09:45:03 -- common/autotest_common.sh@10 -- # set +x 00:07:09.728 ************************************ 00:07:09.728 START TEST accel_dif_generate 00:07:09.728 ************************************ 00:07:09.728 09:45:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:09.728 09:45:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.728 09:45:03 -- accel/accel.sh@17 -- # local accel_module 00:07:09.728 09:45:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:09.728 09:45:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:09.728 09:45:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.728 09:45:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.728 09:45:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.728 09:45:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.728 09:45:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.728 09:45:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.728 09:45:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.728 09:45:03 -- accel/accel.sh@42 -- # jq -r . 00:07:09.728 [2024-06-10 09:45:03.397739] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:09.728 [2024-06-10 09:45:03.397907] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60259 ] 00:07:09.987 [2024-06-10 09:45:03.570144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.987 [2024-06-10 09:45:03.748670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.522 09:45:05 -- accel/accel.sh@18 -- # out=' 00:07:12.522 SPDK Configuration: 00:07:12.522 Core mask: 0x1 00:07:12.522 00:07:12.522 Accel Perf Configuration: 00:07:12.522 Workload Type: dif_generate 00:07:12.522 Vector size: 4096 bytes 00:07:12.522 Transfer size: 4096 bytes 00:07:12.522 Block size: 512 bytes 00:07:12.522 Metadata size: 8 bytes 00:07:12.522 Vector count 1 00:07:12.522 Module: software 00:07:12.522 Queue depth: 32 00:07:12.522 Allocate depth: 32 00:07:12.522 # threads/core: 1 00:07:12.522 Run time: 1 seconds 00:07:12.522 Verify: No 00:07:12.522 00:07:12.522 Running for 1 seconds... 
00:07:12.522 00:07:12.522 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.522 ------------------------------------------------------------------------------------ 00:07:12.522 0,0 112864/s 447 MiB/s 0 0 00:07:12.522 ==================================================================================== 00:07:12.522 Total 112864/s 440 MiB/s 0 0' 00:07:12.522 09:45:05 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:05 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:12.522 09:45:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:12.522 09:45:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.522 09:45:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.522 09:45:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.522 09:45:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.522 09:45:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.522 09:45:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.522 09:45:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.522 09:45:05 -- accel/accel.sh@42 -- # jq -r . 00:07:12.522 [2024-06-10 09:45:05.717480] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:12.522 [2024-06-10 09:45:05.717588] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60285 ] 00:07:12.522 [2024-06-10 09:45:05.870035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.522 [2024-06-10 09:45:06.035901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val=0x1 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val=dif_generate 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 
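
dif_generate, above, produces the protection bytes without checking them, and is the fastest of the three DIF workloads in this run. Cross-checking the Total line (illustration only):

    echo $(( 112864 * 4096 / 1024 / 1024 ))   # prints 440 (MiB/s), matching the Total line
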
00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val=software 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val=32 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val=32 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val=1 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val=No 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.522 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:12.522 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:12.522 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.427 09:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.427 09:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.427 09:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.427 09:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.427 09:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.427 09:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.427 09:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.427 09:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.427 09:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.427 09:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.427 09:45:07 -- 
accel/accel.sh@20 -- # IFS=: 00:07:14.427 09:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.427 09:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.427 09:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.427 09:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.427 09:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.427 09:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.427 09:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.427 09:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.427 09:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.427 09:45:07 -- accel/accel.sh@21 -- # val= 00:07:14.427 09:45:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.427 09:45:07 -- accel/accel.sh@20 -- # IFS=: 00:07:14.427 09:45:07 -- accel/accel.sh@20 -- # read -r var val 00:07:14.427 09:45:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.427 09:45:08 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:14.427 09:45:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.427 00:07:14.427 real 0m4.666s 00:07:14.427 user 0m4.180s 00:07:14.427 sys 0m0.275s 00:07:14.427 ************************************ 00:07:14.427 END TEST accel_dif_generate 00:07:14.427 ************************************ 00:07:14.427 09:45:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.427 09:45:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.427 09:45:08 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:14.427 09:45:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:14.427 09:45:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.427 09:45:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.427 ************************************ 00:07:14.427 START TEST accel_dif_generate_copy 00:07:14.427 ************************************ 00:07:14.427 09:45:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:14.427 09:45:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.427 09:45:08 -- accel/accel.sh@17 -- # local accel_module 00:07:14.427 09:45:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:14.427 09:45:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.427 09:45:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:14.427 09:45:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.427 09:45:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.427 09:45:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.427 09:45:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.427 09:45:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.427 09:45:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.427 09:45:08 -- accel/accel.sh@42 -- # jq -r . 00:07:14.427 [2024-06-10 09:45:08.117059] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:14.427 [2024-06-10 09:45:08.117250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60332 ] 00:07:14.685 [2024-06-10 09:45:08.285485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.685 [2024-06-10 09:45:08.445226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.219 09:45:10 -- accel/accel.sh@18 -- # out=' 00:07:17.219 SPDK Configuration: 00:07:17.219 Core mask: 0x1 00:07:17.219 00:07:17.219 Accel Perf Configuration: 00:07:17.219 Workload Type: dif_generate_copy 00:07:17.219 Vector size: 4096 bytes 00:07:17.219 Transfer size: 4096 bytes 00:07:17.219 Vector count 1 00:07:17.219 Module: software 00:07:17.219 Queue depth: 32 00:07:17.219 Allocate depth: 32 00:07:17.219 # threads/core: 1 00:07:17.219 Run time: 1 seconds 00:07:17.219 Verify: No 00:07:17.219 00:07:17.219 Running for 1 seconds... 00:07:17.219 00:07:17.219 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.219 ------------------------------------------------------------------------------------ 00:07:17.219 0,0 81792/s 324 MiB/s 0 0 00:07:17.219 ==================================================================================== 00:07:17.219 Total 81792/s 319 MiB/s 0 0' 00:07:17.219 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.219 09:45:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:17.219 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.219 09:45:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:17.219 09:45:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.219 09:45:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.219 09:45:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.219 09:45:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.220 09:45:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.220 09:45:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.220 09:45:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.220 09:45:10 -- accel/accel.sh@42 -- # jq -r . 00:07:17.220 [2024-06-10 09:45:10.453015] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
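
dif_generate_copy, above, appears to fuse DIF generation with a copy to a separate destination buffer — that reading of the opcode name is an assumption, but the lower rate versus plain dif_generate (319 vs 440 MiB/s on the Total lines) fits each transfer also writing a full output copy. Checking the Total line:

    echo $(( 81792 * 4096 / 1024 / 1024 ))   # prints 319 (MiB/s), matching the Total line
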
00:07:17.220 [2024-06-10 09:45:10.453197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60358 ] 00:07:17.220 [2024-06-10 09:45:10.619659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.220 [2024-06-10 09:45:10.793326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val= 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val= 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val=0x1 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val= 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val= 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val= 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val=software 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val=32 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val=32 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 
-- # val=1 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val=No 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val= 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:17.220 09:45:10 -- accel/accel.sh@21 -- # val= 00:07:17.220 09:45:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # IFS=: 00:07:17.220 09:45:10 -- accel/accel.sh@20 -- # read -r var val 00:07:19.122 09:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.122 09:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.122 09:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.122 09:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.122 09:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.122 09:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.122 09:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.122 09:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.122 09:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.122 09:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.122 09:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.122 09:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.122 09:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.122 09:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.122 09:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.122 09:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.122 09:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.122 09:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.122 09:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.122 09:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.122 09:45:12 -- accel/accel.sh@21 -- # val= 00:07:19.122 09:45:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.122 09:45:12 -- accel/accel.sh@20 -- # IFS=: 00:07:19.122 09:45:12 -- accel/accel.sh@20 -- # read -r var val 00:07:19.122 09:45:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.122 09:45:12 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:19.122 09:45:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.122 00:07:19.122 real 0m4.671s 00:07:19.122 user 0m4.182s 00:07:19.122 sys 0m0.276s 00:07:19.122 09:45:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.122 09:45:12 -- common/autotest_common.sh@10 -- # set +x 00:07:19.122 ************************************ 00:07:19.122 END TEST accel_dif_generate_copy 00:07:19.122 ************************************ 00:07:19.122 09:45:12 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:19.122 09:45:12 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.122 09:45:12 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:19.122 09:45:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.122 09:45:12 -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.122 ************************************ 00:07:19.122 START TEST accel_comp 00:07:19.122 ************************************ 00:07:19.122 09:45:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.122 09:45:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.122 09:45:12 -- accel/accel.sh@17 -- # local accel_module 00:07:19.122 09:45:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.122 09:45:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.122 09:45:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.122 09:45:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.122 09:45:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.122 09:45:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.122 09:45:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.122 09:45:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.122 09:45:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.122 09:45:12 -- accel/accel.sh@42 -- # jq -r . 00:07:19.122 [2024-06-10 09:45:12.845541] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:19.122 [2024-06-10 09:45:12.845745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60399 ] 00:07:19.381 [2024-06-10 09:45:13.009797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.641 [2024-06-10 09:45:13.195029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.542 09:45:15 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:21.542 00:07:21.542 SPDK Configuration: 00:07:21.542 Core mask: 0x1 00:07:21.542 00:07:21.542 Accel Perf Configuration: 00:07:21.542 Workload Type: compress 00:07:21.542 Transfer size: 4096 bytes 00:07:21.542 Vector count 1 00:07:21.542 Module: software 00:07:21.542 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:21.542 Queue depth: 32 00:07:21.542 Allocate depth: 32 00:07:21.542 # threads/core: 1 00:07:21.542 Run time: 1 seconds 00:07:21.542 Verify: No 00:07:21.542 00:07:21.542 Running for 1 seconds... 
00:07:21.542 00:07:21.543 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.543 ------------------------------------------------------------------------------------ 00:07:21.543 0,0 48288/s 201 MiB/s 0 0 00:07:21.543 ==================================================================================== 00:07:21.543 Total 48288/s 188 MiB/s 0 0' 00:07:21.543 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:21.543 09:45:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:21.543 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:21.543 09:45:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:21.543 09:45:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.543 09:45:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.543 09:45:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.543 09:45:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.543 09:45:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.543 09:45:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.543 09:45:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.543 09:45:15 -- accel/accel.sh@42 -- # jq -r . 00:07:21.543 [2024-06-10 09:45:15.175733] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:21.543 [2024-06-10 09:45:15.175896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60436 ] 00:07:21.802 [2024-06-10 09:45:15.347465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.802 [2024-06-10 09:45:15.510739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val= 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val= 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val= 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val=0x1 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val= 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val= 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val=compress 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 
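
The compress results above are the lowest of the run (188 MiB/s on the Total line), which is unsurprising: compression is compute-bound where the earlier workloads are essentially memory-bound. The -l argument in the command line supplies the input file echoed back as "File Name" in the config dump. Cross-check (illustration only):

    echo $(( 48288 * 4096 / 1024 / 1024 ))   # prints 188 (MiB/s), matching the Total line
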
00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val= 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val=software 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val=32 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val=32 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val=1 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val=No 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val= 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:22.062 09:45:15 -- accel/accel.sh@21 -- # val= 00:07:22.062 09:45:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.062 09:45:15 -- accel/accel.sh@20 -- # IFS=: 00:07:22.063 09:45:15 -- accel/accel.sh@20 -- # read -r var val 00:07:23.973 09:45:17 -- accel/accel.sh@21 -- # val= 00:07:23.973 09:45:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.973 09:45:17 -- accel/accel.sh@20 -- # IFS=: 00:07:23.973 09:45:17 -- accel/accel.sh@20 -- # read -r var val 00:07:23.973 09:45:17 -- accel/accel.sh@21 -- # val= 00:07:23.973 09:45:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.973 09:45:17 -- accel/accel.sh@20 -- # IFS=: 00:07:23.973 09:45:17 -- accel/accel.sh@20 -- # read -r var val 00:07:23.973 09:45:17 -- accel/accel.sh@21 -- # val= 00:07:23.973 09:45:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.973 09:45:17 -- accel/accel.sh@20 -- # IFS=: 00:07:23.973 09:45:17 -- accel/accel.sh@20 -- # read -r var val 00:07:23.973 09:45:17 -- accel/accel.sh@21 -- # val= 
00:07:23.973 09:45:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.973 09:45:17 -- accel/accel.sh@20 -- # IFS=: 00:07:23.973 09:45:17 -- accel/accel.sh@20 -- # read -r var val 00:07:23.973 09:45:17 -- accel/accel.sh@21 -- # val= 00:07:23.973 09:45:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.973 09:45:17 -- accel/accel.sh@20 -- # IFS=: 00:07:23.973 09:45:17 -- accel/accel.sh@20 -- # read -r var val 00:07:23.973 09:45:17 -- accel/accel.sh@21 -- # val= 00:07:23.973 09:45:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.973 09:45:17 -- accel/accel.sh@20 -- # IFS=: 00:07:23.973 09:45:17 -- accel/accel.sh@20 -- # read -r var val 00:07:23.973 09:45:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.973 09:45:17 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:23.973 09:45:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.973 00:07:23.973 real 0m4.638s 00:07:23.973 user 0m2.101s 00:07:23.973 sys 0m0.145s 00:07:23.973 09:45:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.973 09:45:17 -- common/autotest_common.sh@10 -- # set +x 00:07:23.973 ************************************ 00:07:23.973 END TEST accel_comp 00:07:23.973 ************************************ 00:07:23.973 09:45:17 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:23.973 09:45:17 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:23.973 09:45:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.973 09:45:17 -- common/autotest_common.sh@10 -- # set +x 00:07:23.973 ************************************ 00:07:23.973 START TEST accel_decomp 00:07:23.973 ************************************ 00:07:23.973 09:45:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:23.973 09:45:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.973 09:45:17 -- accel/accel.sh@17 -- # local accel_module 00:07:23.973 09:45:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:23.973 09:45:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:23.973 09:45:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.973 09:45:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.973 09:45:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.973 09:45:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.973 09:45:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.973 09:45:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.973 09:45:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.973 09:45:17 -- accel/accel.sh@42 -- # jq -r . 00:07:23.973 [2024-06-10 09:45:17.520195] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:23.973 [2024-06-10 09:45:17.520385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60477 ] 00:07:23.973 [2024-06-10 09:45:17.681023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.232 [2024-06-10 09:45:17.844268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.137 09:45:19 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:26.137 00:07:26.138 SPDK Configuration: 00:07:26.138 Core mask: 0x1 00:07:26.138 00:07:26.138 Accel Perf Configuration: 00:07:26.138 Workload Type: decompress 00:07:26.138 Transfer size: 4096 bytes 00:07:26.138 Vector count 1 00:07:26.138 Module: software 00:07:26.138 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.138 Queue depth: 32 00:07:26.138 Allocate depth: 32 00:07:26.138 # threads/core: 1 00:07:26.138 Run time: 1 seconds 00:07:26.138 Verify: Yes 00:07:26.138 00:07:26.138 Running for 1 seconds... 00:07:26.138 00:07:26.138 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.138 ------------------------------------------------------------------------------------ 00:07:26.138 0,0 64064/s 250 MiB/s 0 0 00:07:26.138 ==================================================================================== 00:07:26.138 Total 64064/s 250 MiB/s 0 0' 00:07:26.138 09:45:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.138 09:45:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.138 09:45:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:26.138 09:45:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:26.138 09:45:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.138 09:45:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.138 09:45:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.138 09:45:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.138 09:45:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.138 09:45:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.138 09:45:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.138 09:45:19 -- accel/accel.sh@42 -- # jq -r . 00:07:26.138 [2024-06-10 09:45:19.800689] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:07:26.138 [2024-06-10 09:45:19.800838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60503 ] 00:07:26.396 [2024-06-10 09:45:19.958675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.396 [2024-06-10 09:45:20.122600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val= 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val= 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val= 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val=0x1 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val= 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val= 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val=decompress 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val= 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val=software 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val=32 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- 
accel/accel.sh@21 -- # val=32 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val=1 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val=Yes 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val= 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.656 09:45:20 -- accel/accel.sh@21 -- # val= 00:07:26.656 09:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # IFS=: 00:07:26.656 09:45:20 -- accel/accel.sh@20 -- # read -r var val 00:07:28.563 09:45:22 -- accel/accel.sh@21 -- # val= 00:07:28.564 09:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.564 09:45:22 -- accel/accel.sh@20 -- # IFS=: 00:07:28.564 09:45:22 -- accel/accel.sh@20 -- # read -r var val 00:07:28.564 09:45:22 -- accel/accel.sh@21 -- # val= 00:07:28.564 09:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.564 09:45:22 -- accel/accel.sh@20 -- # IFS=: 00:07:28.564 09:45:22 -- accel/accel.sh@20 -- # read -r var val 00:07:28.564 09:45:22 -- accel/accel.sh@21 -- # val= 00:07:28.564 09:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.564 09:45:22 -- accel/accel.sh@20 -- # IFS=: 00:07:28.564 09:45:22 -- accel/accel.sh@20 -- # read -r var val 00:07:28.564 09:45:22 -- accel/accel.sh@21 -- # val= 00:07:28.564 09:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.564 09:45:22 -- accel/accel.sh@20 -- # IFS=: 00:07:28.564 09:45:22 -- accel/accel.sh@20 -- # read -r var val 00:07:28.564 09:45:22 -- accel/accel.sh@21 -- # val= 00:07:28.564 09:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.564 09:45:22 -- accel/accel.sh@20 -- # IFS=: 00:07:28.564 09:45:22 -- accel/accel.sh@20 -- # read -r var val 00:07:28.564 09:45:22 -- accel/accel.sh@21 -- # val= 00:07:28.564 09:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.564 09:45:22 -- accel/accel.sh@20 -- # IFS=: 00:07:28.564 09:45:22 -- accel/accel.sh@20 -- # read -r var val 00:07:28.564 09:45:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.564 09:45:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:28.564 09:45:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.564 00:07:28.564 real 0m4.564s 00:07:28.564 user 0m4.081s 00:07:28.564 sys 0m0.269s 00:07:28.564 09:45:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.564 ************************************ 00:07:28.564 END TEST accel_decomp 00:07:28.564 ************************************ 00:07:28.564 09:45:22 -- common/autotest_common.sh@10 -- # set +x 00:07:28.564 09:45:22 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
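(Aside, not part of the log: the decompress run that just ended passed -y, and its banner read "Verify: Yes" where the compress run said "Verify: No"; the run_test line above additionally passes -o 0, and the banner below reports "Transfer size: 111250 bytes" instead of 4096. Flag semantics are inferred from those banners only; a hedged reconstruction:)
# Hedged sketch, not log output: verified decompress over the full chunk.
# -y   enables result verification ("Verify: Yes" in the banner)
# -o 0 appears to select the whole ~111 KiB input rather than 4096-byte blocks
spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/examples/accel_perf" -t 1 -w decompress -l "$spdk/test/accel/bib" -y -o 0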
00:07:28.564 09:45:22 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:28.564 09:45:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.564 09:45:22 -- common/autotest_common.sh@10 -- # set +x 00:07:28.564 ************************************ 00:07:28.564 START TEST accel_decmop_full 00:07:28.564 ************************************ 00:07:28.564 09:45:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:28.564 09:45:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.564 09:45:22 -- accel/accel.sh@17 -- # local accel_module 00:07:28.564 09:45:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:28.564 09:45:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:28.564 09:45:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.564 09:45:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.564 09:45:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.564 09:45:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.564 09:45:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.564 09:45:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.564 09:45:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.564 09:45:22 -- accel/accel.sh@42 -- # jq -r . 00:07:28.564 [2024-06-10 09:45:22.133686] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:28.564 [2024-06-10 09:45:22.133875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60544 ] 00:07:28.564 [2024-06-10 09:45:22.301148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.823 [2024-06-10 09:45:22.472017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.727 09:45:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:30.727 00:07:30.727 SPDK Configuration: 00:07:30.727 Core mask: 0x1 00:07:30.727 00:07:30.727 Accel Perf Configuration: 00:07:30.727 Workload Type: decompress 00:07:30.727 Transfer size: 111250 bytes 00:07:30.727 Vector count 1 00:07:30.727 Module: software 00:07:30.727 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.727 Queue depth: 32 00:07:30.727 Allocate depth: 32 00:07:30.727 # threads/core: 1 00:07:30.727 Run time: 1 seconds 00:07:30.727 Verify: Yes 00:07:30.727 00:07:30.727 Running for 1 seconds... 
00:07:30.727 00:07:30.727 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.727 ------------------------------------------------------------------------------------ 00:07:30.727 0,0 4640/s 492 MiB/s 0 0 00:07:30.727 ==================================================================================== 00:07:30.727 Total 4640/s 492 MiB/s 0 0' 00:07:30.727 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:30.727 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:30.727 09:45:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:30.727 09:45:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.727 09:45:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:30.727 09:45:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.727 09:45:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.727 09:45:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.727 09:45:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.727 09:45:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.727 09:45:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.727 09:45:24 -- accel/accel.sh@42 -- # jq -r . 00:07:30.727 [2024-06-10 09:45:24.436810] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:30.727 [2024-06-10 09:45:24.436991] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60576 ] 00:07:30.985 [2024-06-10 09:45:24.607020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.245 [2024-06-10 09:45:24.770274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val= 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val= 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val= 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val=0x1 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val= 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val= 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val=decompress 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:31.245 09:45:24 -- accel/accel.sh@20
-- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val= 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val=software 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val=32 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val=32 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val=1 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val=Yes 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val= 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:31.245 09:45:24 -- accel/accel.sh@21 -- # val= 00:07:31.245 09:45:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # IFS=: 00:07:31.245 09:45:24 -- accel/accel.sh@20 -- # read -r var val 00:07:33.149 09:45:26 -- accel/accel.sh@21 -- # val= 00:07:33.149 09:45:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.149 09:45:26 -- accel/accel.sh@20 -- # IFS=: 00:07:33.149 09:45:26 -- accel/accel.sh@20 -- # read -r var val 00:07:33.149 09:45:26 -- accel/accel.sh@21 -- # val= 00:07:33.149 09:45:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.149 09:45:26 -- accel/accel.sh@20 -- # IFS=: 00:07:33.149 09:45:26 -- accel/accel.sh@20 -- # read -r var val 00:07:33.149 09:45:26 -- accel/accel.sh@21 -- # val= 00:07:33.149 09:45:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.149 09:45:26 -- accel/accel.sh@20 -- # IFS=: 00:07:33.149 09:45:26 -- accel/accel.sh@20 -- # read -r var val 00:07:33.149 09:45:26 -- accel/accel.sh@21 -- # 
val= 00:07:33.149 09:45:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.149 09:45:26 -- accel/accel.sh@20 -- # IFS=: 00:07:33.149 09:45:26 -- accel/accel.sh@20 -- # read -r var val 00:07:33.149 09:45:26 -- accel/accel.sh@21 -- # val= 00:07:33.149 09:45:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.149 09:45:26 -- accel/accel.sh@20 -- # IFS=: 00:07:33.149 09:45:26 -- accel/accel.sh@20 -- # read -r var val 00:07:33.149 09:45:26 -- accel/accel.sh@21 -- # val= 00:07:33.149 09:45:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.149 09:45:26 -- accel/accel.sh@20 -- # IFS=: 00:07:33.149 09:45:26 -- accel/accel.sh@20 -- # read -r var val 00:07:33.149 09:45:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.149 09:45:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:33.149 09:45:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.149 00:07:33.149 real 0m4.607s 00:07:33.149 user 0m4.122s 00:07:33.149 sys 0m0.272s 00:07:33.149 09:45:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.149 09:45:26 -- common/autotest_common.sh@10 -- # set +x 00:07:33.149 ************************************ 00:07:33.149 END TEST accel_decmop_full 00:07:33.149 ************************************ 00:07:33.149 09:45:26 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:33.149 09:45:26 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:33.149 09:45:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.149 09:45:26 -- common/autotest_common.sh@10 -- # set +x 00:07:33.149 ************************************ 00:07:33.149 START TEST accel_decomp_mcore 00:07:33.149 ************************************ 00:07:33.149 09:45:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:33.149 09:45:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.149 09:45:26 -- accel/accel.sh@17 -- # local accel_module 00:07:33.149 09:45:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:33.149 09:45:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:33.149 09:45:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.149 09:45:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.149 09:45:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.149 09:45:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.149 09:45:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.149 09:45:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.149 09:45:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.149 09:45:26 -- accel/accel.sh@42 -- # jq -r . 00:07:33.149 [2024-06-10 09:45:26.791687] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
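(Aside, not part of the log: every Bandwidth cell in these tables is just transfers/sec times transfer size, floored to whole MiB/s. The 4640/s full-buffer run above checks out:)
# Editor's check, not log output: 4640 transfers/s x 111250 B per transfer.
awk 'BEGIN { printf "%d MiB/s\n", 4640 * 111250 / 1048576 }'   # -> 492 MiB/s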
00:07:33.149 [2024-06-10 09:45:26.791842] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60621 ] 00:07:33.409 [2024-06-10 09:45:26.961122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.409 [2024-06-10 09:45:27.129977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.409 [2024-06-10 09:45:27.130034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.409 [2024-06-10 09:45:27.130194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.409 [2024-06-10 09:45:27.130205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.940 09:45:29 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:35.940 00:07:35.940 SPDK Configuration: 00:07:35.940 Core mask: 0xf 00:07:35.940 00:07:35.940 Accel Perf Configuration: 00:07:35.940 Workload Type: decompress 00:07:35.940 Transfer size: 4096 bytes 00:07:35.940 Vector count 1 00:07:35.940 Module: software 00:07:35.940 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.940 Queue depth: 32 00:07:35.940 Allocate depth: 32 00:07:35.940 # threads/core: 1 00:07:35.940 Run time: 1 seconds 00:07:35.940 Verify: Yes 00:07:35.940 00:07:35.940 Running for 1 seconds... 00:07:35.940 00:07:35.940 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.940 ------------------------------------------------------------------------------------ 00:07:35.940 0,0 53632/s 209 MiB/s 0 0 00:07:35.940 3,0 54752/s 213 MiB/s 0 0 00:07:35.940 2,0 54016/s 211 MiB/s 0 0 00:07:35.940 1,0 53152/s 207 MiB/s 0 0 00:07:35.940 ==================================================================================== 00:07:35.940 Total 215552/s 842 MiB/s 0 0' 00:07:35.940 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:35.940 09:45:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:35.940 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:35.940 09:45:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:35.940 09:45:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.940 09:45:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.940 09:45:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.940 09:45:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.940 09:45:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.940 09:45:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.940 09:45:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.940 09:45:29 -- accel/accel.sh@42 -- # jq -r . 00:07:35.940 [2024-06-10 09:45:29.216439] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
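(Aside, not part of the log: with -m 0xf the four per-core rows must add up to the Total row of the table above, and they do:)
# Editor's check, not log output, against the 0xf table above.
echo $(( 53632 + 54752 + 54016 + 53152 ))   # -> 215552 transfers/s, as reported
echo $(( 215552 * 4096 / 1048576 ))         # -> 842 MiB/s, matching Total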
00:07:35.940 [2024-06-10 09:45:29.216677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60651 ] 00:07:35.940 [2024-06-10 09:45:29.387278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.940 [2024-06-10 09:45:29.552427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.940 [2024-06-10 09:45:29.552571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.940 [2024-06-10 09:45:29.553057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.940 [2024-06-10 09:45:29.553057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val= 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val= 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val= 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val=0xf 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val= 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val= 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val=decompress 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val= 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val=software 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 
00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val=32 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val=32 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val=1 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val=Yes 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val= 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:36.199 09:45:29 -- accel/accel.sh@21 -- # val= 00:07:36.199 09:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # IFS=: 00:07:36.199 09:45:29 -- accel/accel.sh@20 -- # read -r var val 00:07:38.105 09:45:31 -- accel/accel.sh@21 -- # val= 00:07:38.105 09:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.105 09:45:31 -- accel/accel.sh@21 -- # val= 00:07:38.105 09:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.105 09:45:31 -- accel/accel.sh@21 -- # val= 00:07:38.105 09:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.105 09:45:31 -- accel/accel.sh@21 -- # val= 00:07:38.105 09:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.105 09:45:31 -- accel/accel.sh@21 -- # val= 00:07:38.105 09:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.105 09:45:31 -- accel/accel.sh@21 -- # val= 00:07:38.105 09:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.105 09:45:31 -- accel/accel.sh@21 -- # val= 00:07:38.105 09:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.105 09:45:31 -- accel/accel.sh@21 -- # val= 00:07:38.105 09:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.105 09:45:31 -- 
accel/accel.sh@20 -- # read -r var val 00:07:38.105 09:45:31 -- accel/accel.sh@21 -- # val= 00:07:38.105 09:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.105 09:45:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.105 09:45:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.105 09:45:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:38.105 09:45:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.105 00:07:38.105 real 0m4.791s 00:07:38.105 user 0m14.228s 00:07:38.105 sys 0m0.342s 00:07:38.105 09:45:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.105 09:45:31 -- common/autotest_common.sh@10 -- # set +x 00:07:38.105 ************************************ 00:07:38.105 END TEST accel_decomp_mcore 00:07:38.105 ************************************ 00:07:38.105 09:45:31 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:38.105 09:45:31 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:38.105 09:45:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.105 09:45:31 -- common/autotest_common.sh@10 -- # set +x 00:07:38.105 ************************************ 00:07:38.105 START TEST accel_decomp_full_mcore 00:07:38.105 ************************************ 00:07:38.105 09:45:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:38.105 09:45:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.105 09:45:31 -- accel/accel.sh@17 -- # local accel_module 00:07:38.105 09:45:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:38.105 09:45:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:38.105 09:45:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.105 09:45:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.105 09:45:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.105 09:45:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.105 09:45:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.105 09:45:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.105 09:45:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.105 09:45:31 -- accel/accel.sh@42 -- # jq -r . 00:07:38.105 [2024-06-10 09:45:31.629707] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:38.105 [2024-06-10 09:45:31.629856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60701 ] 00:07:38.105 [2024-06-10 09:45:31.798315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.385 [2024-06-10 09:45:31.978534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.385 [2024-06-10 09:45:31.978690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.385 [2024-06-10 09:45:31.978801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.385 [2024-06-10 09:45:31.978959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.290 09:45:33 -- accel/accel.sh@18 -- # out='Preparing input file... 
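(Aside, not part of the log: both mcore tests pass -m 0xf, which the banner echoes as "Core mask: 0xf". 0xf is binary 1111, i.e. cores 0 through 3, which is why four reactors start and four Core,Thread rows appear. A small decode:)
# Hedged sketch, not log output: expand the hex core mask into core numbers.
mask=0xf
for core in 0 1 2 3; do
  (( (mask >> core) & 1 )) && echo "core $core enabled"
done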
00:07:40.290 00:07:40.290 SPDK Configuration: 00:07:40.290 Core mask: 0xf 00:07:40.290 00:07:40.290 Accel Perf Configuration: 00:07:40.290 Workload Type: decompress 00:07:40.290 Transfer size: 111250 bytes 00:07:40.290 Vector count 1 00:07:40.290 Module: software 00:07:40.290 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.290 Queue depth: 32 00:07:40.290 Allocate depth: 32 00:07:40.290 # threads/core: 1 00:07:40.290 Run time: 1 seconds 00:07:40.290 Verify: Yes 00:07:40.290 00:07:40.290 Running for 1 seconds... 00:07:40.290 00:07:40.290 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.290 ------------------------------------------------------------------------------------ 00:07:40.290 0,0 4288/s 454 MiB/s 0 0 00:07:40.290 3,0 4320/s 458 MiB/s 0 0 00:07:40.290 2,0 4288/s 454 MiB/s 0 0 00:07:40.290 1,0 4224/s 448 MiB/s 0 0 00:07:40.290 ==================================================================================== 00:07:40.290 Total 17120/s 1816 MiB/s 0 0' 00:07:40.290 09:45:33 -- accel/accel.sh@20 -- # IFS=: 00:07:40.290 09:45:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.290 09:45:33 -- accel/accel.sh@20 -- # read -r var val 00:07:40.290 09:45:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.290 09:45:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.290 09:45:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.290 09:45:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.290 09:45:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.290 09:45:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.290 09:45:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.290 09:45:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.290 09:45:33 -- accel/accel.sh@42 -- # jq -r . 00:07:40.290 [2024-06-10 09:45:34.024180] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
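(Aside, not part of the log: against the single-core full-buffer run's 4640 transfers/s, each of the four cores here still sustains roughly 4300/s, so this workload scales almost linearly across the mask:)
# Editor's check, not log output: per-core efficiency under -m 0xf.
awk 'BEGIN { printf "%.1f%% of the solo rate\n", 100 * 4288 / 4640 }'   # -> 92.4%
echo $(( 17120 * 111250 / 1048576 ))   # -> 1816 MiB/s aggregate, matching Total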
00:07:40.290 [2024-06-10 09:45:34.024315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60730 ] 00:07:40.549 [2024-06-10 09:45:34.182854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.807 [2024-06-10 09:45:34.358388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.807 [2024-06-10 09:45:34.358550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.807 [2024-06-10 09:45:34.358683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.807 [2024-06-10 09:45:34.358697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val= 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val= 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val= 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val=0xf 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val= 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val= 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val=decompress 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val= 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val=software 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 
00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val=32 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val=32 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val=1 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val=Yes 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val= 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:40.807 09:45:34 -- accel/accel.sh@21 -- # val= 00:07:40.807 09:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # IFS=: 00:07:40.807 09:45:34 -- accel/accel.sh@20 -- # read -r var val 00:07:42.703 09:45:36 -- accel/accel.sh@21 -- # val= 00:07:42.703 09:45:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # IFS=: 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # read -r var val 00:07:42.703 09:45:36 -- accel/accel.sh@21 -- # val= 00:07:42.703 09:45:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # IFS=: 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # read -r var val 00:07:42.703 09:45:36 -- accel/accel.sh@21 -- # val= 00:07:42.703 09:45:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # IFS=: 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # read -r var val 00:07:42.703 09:45:36 -- accel/accel.sh@21 -- # val= 00:07:42.703 09:45:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # IFS=: 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # read -r var val 00:07:42.703 09:45:36 -- accel/accel.sh@21 -- # val= 00:07:42.703 09:45:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # IFS=: 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # read -r var val 00:07:42.703 09:45:36 -- accel/accel.sh@21 -- # val= 00:07:42.703 09:45:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # IFS=: 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # read -r var val 00:07:42.703 09:45:36 -- accel/accel.sh@21 -- # val= 00:07:42.703 09:45:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # IFS=: 00:07:42.703 09:45:36 -- accel/accel.sh@20 -- # read -r var val 00:07:42.703 09:45:36 -- accel/accel.sh@21 -- # val= 00:07:42.703 09:45:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.704 09:45:36 -- accel/accel.sh@20 -- # IFS=: 00:07:42.704 09:45:36 -- 
accel/accel.sh@20 -- # read -r var val 00:07:42.704 09:45:36 -- accel/accel.sh@21 -- # val= 00:07:42.704 09:45:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.704 09:45:36 -- accel/accel.sh@20 -- # IFS=: 00:07:42.704 09:45:36 -- accel/accel.sh@20 -- # read -r var val 00:07:42.704 09:45:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:42.704 09:45:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:42.704 09:45:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.704 00:07:42.704 real 0m4.790s 00:07:42.704 user 0m14.287s 00:07:42.704 sys 0m0.318s 00:07:42.704 09:45:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.704 09:45:36 -- common/autotest_common.sh@10 -- # set +x 00:07:42.704 ************************************ 00:07:42.704 END TEST accel_decomp_full_mcore 00:07:42.704 ************************************ 00:07:42.704 09:45:36 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:42.704 09:45:36 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:42.704 09:45:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.704 09:45:36 -- common/autotest_common.sh@10 -- # set +x 00:07:42.704 ************************************ 00:07:42.704 START TEST accel_decomp_mthread 00:07:42.704 ************************************ 00:07:42.704 09:45:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:42.704 09:45:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.704 09:45:36 -- accel/accel.sh@17 -- # local accel_module 00:07:42.704 09:45:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:42.704 09:45:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:42.704 09:45:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.704 09:45:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.704 09:45:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.704 09:45:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.704 09:45:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.704 09:45:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.704 09:45:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.704 09:45:36 -- accel/accel.sh@42 -- # jq -r . 00:07:42.961 [2024-06-10 09:45:36.473020] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:42.961 [2024-06-10 09:45:36.473288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60780 ] 00:07:42.961 [2024-06-10 09:45:36.645740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.219 [2024-06-10 09:45:36.823388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.120 09:45:38 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:45.120 00:07:45.120 SPDK Configuration: 00:07:45.120 Core mask: 0x1 00:07:45.120 00:07:45.120 Accel Perf Configuration: 00:07:45.120 Workload Type: decompress 00:07:45.120 Transfer size: 4096 bytes 00:07:45.120 Vector count 1 00:07:45.120 Module: software 00:07:45.120 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.120 Queue depth: 32 00:07:45.120 Allocate depth: 32 00:07:45.120 # threads/core: 2 00:07:45.120 Run time: 1 seconds 00:07:45.120 Verify: Yes 00:07:45.120 00:07:45.120 Running for 1 seconds... 00:07:45.120 00:07:45.120 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.120 ------------------------------------------------------------------------------------ 00:07:45.120 0,1 32576/s 127 MiB/s 0 0 00:07:45.120 0,0 32448/s 126 MiB/s 0 0 00:07:45.120 ==================================================================================== 00:07:45.120 Total 65024/s 254 MiB/s 0 0' 00:07:45.120 09:45:38 -- accel/accel.sh@20 -- # IFS=: 00:07:45.120 09:45:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:45.120 09:45:38 -- accel/accel.sh@20 -- # read -r var val 00:07:45.120 09:45:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:45.120 09:45:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.120 09:45:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.120 09:45:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.120 09:45:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.120 09:45:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.120 09:45:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.120 09:45:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.120 09:45:38 -- accel/accel.sh@42 -- # jq -r . 00:07:45.120 [2024-06-10 09:45:38.795075] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
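(Aside, not part of the log: -T 2 runs two worker threads on the one enabled core, which the banner echoes as "# threads/core: 2"; hence the two rows 0,0 and 0,1 above, core 0, threads 0 and 1. A hedged reconstruction:)
# Hedged sketch, not log output: two-thread decompress on a single core.
spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/examples/accel_perf" -t 1 -w decompress -l "$spdk/test/accel/bib" -y -T 2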
00:07:45.120 [2024-06-10 09:45:38.795287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60811 ] 00:07:45.379 [2024-06-10 09:45:38.968574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.379 [2024-06-10 09:45:39.141062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val= 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val= 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val= 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val=0x1 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val= 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val= 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val=decompress 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val= 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val=software 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val=32 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- 
accel/accel.sh@21 -- # val=32 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val=2 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val=Yes 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val= 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:45.638 09:45:39 -- accel/accel.sh@21 -- # val= 00:07:45.638 09:45:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # IFS=: 00:07:45.638 09:45:39 -- accel/accel.sh@20 -- # read -r var val 00:07:47.541 09:45:41 -- accel/accel.sh@21 -- # val= 00:07:47.541 09:45:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # IFS=: 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # read -r var val 00:07:47.541 09:45:41 -- accel/accel.sh@21 -- # val= 00:07:47.541 09:45:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # IFS=: 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # read -r var val 00:07:47.541 09:45:41 -- accel/accel.sh@21 -- # val= 00:07:47.541 09:45:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # IFS=: 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # read -r var val 00:07:47.541 09:45:41 -- accel/accel.sh@21 -- # val= 00:07:47.541 09:45:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # IFS=: 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # read -r var val 00:07:47.541 09:45:41 -- accel/accel.sh@21 -- # val= 00:07:47.541 09:45:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # IFS=: 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # read -r var val 00:07:47.541 09:45:41 -- accel/accel.sh@21 -- # val= 00:07:47.541 09:45:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # IFS=: 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # read -r var val 00:07:47.541 09:45:41 -- accel/accel.sh@21 -- # val= 00:07:47.541 09:45:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # IFS=: 00:07:47.541 09:45:41 -- accel/accel.sh@20 -- # read -r var val 00:07:47.541 09:45:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.541 09:45:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:47.541 09:45:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.541 00:07:47.541 real 0m4.646s 00:07:47.541 user 0m4.129s 00:07:47.541 sys 0m0.305s 00:07:47.541 09:45:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.541 09:45:41 -- common/autotest_common.sh@10 -- # set +x 00:07:47.541 ************************************ 00:07:47.541 END 
TEST accel_decomp_mthread 00:07:47.541 ************************************ 09:45:41 -- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:47.541 09:45:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:47.541 09:45:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.541 09:45:41 -- common/autotest_common.sh@10 -- # set +x 00:07:47.541 ************************************ 00:07:47.541 START TEST accel_decomp_full_mthread ************************************ 00:07:47.541 09:45:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:47.541 09:45:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.541 09:45:41 -- accel/accel.sh@17 -- # local accel_module 00:07:47.541 09:45:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:47.541 09:45:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:47.541 09:45:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.541 09:45:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.541 09:45:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.541 09:45:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.541 09:45:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.541 09:45:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.541 09:45:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.541 09:45:41 -- accel/accel.sh@42 -- # jq -r . 00:07:47.541 [2024-06-10 09:45:41.166769] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:47.541 [2024-06-10 09:45:41.166911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60852 ] 00:07:47.799 [2024-06-10 09:45:41.331943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.799 [2024-06-10 09:45:41.536353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.328 09:45:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:50.328 00:07:50.328 SPDK Configuration: 00:07:50.328 Core mask: 0x1 00:07:50.328 00:07:50.328 Accel Perf Configuration: 00:07:50.328 Workload Type: decompress 00:07:50.328 Transfer size: 111250 bytes 00:07:50.328 Vector count 1 00:07:50.328 Module: software 00:07:50.328 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.328 Queue depth: 32 00:07:50.328 Allocate depth: 32 00:07:50.328 # threads/core: 2 00:07:50.328 Run time: 1 seconds 00:07:50.328 Verify: Yes 00:07:50.328 00:07:50.328 Running for 1 seconds...
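This second pass is the full-buffer variant of the same workload: -o 0 appears to make accel_perf size each transfer from the input file itself (111250 bytes per the header above) instead of the 4096-byte default, while -T 2 again runs two worker threads per core. A minimal sketch of the invocation under test, assuming the CI's repo layout; the -c /dev/fd/62 argument only feeds the harness's accel JSON config and can be dropped for a plain software-module run:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -T 2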
00:07:50.328 00:07:50.328 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.328 ------------------------------------------------------------------------------------ 00:07:50.328 0,1 2368/s 251 MiB/s 0 0 00:07:50.328 0,0 2368/s 251 MiB/s 0 0 00:07:50.328 ==================================================================================== 00:07:50.328 Total 4736/s 502 MiB/s 0 0' 00:07:50.328 09:45:43 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:43 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.328 09:45:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.328 09:45:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.328 09:45:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.328 09:45:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.328 09:45:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.328 09:45:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.328 09:45:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.328 09:45:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.328 09:45:43 -- accel/accel.sh@42 -- # jq -r . 00:07:50.328 [2024-06-10 09:45:43.544135] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:50.328 [2024-06-10 09:45:43.544330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60878 ] 00:07:50.328 [2024-06-10 09:45:43.715882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.328 [2024-06-10 09:45:43.885454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val= 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val= 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val= 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val=0x1 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val= 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val= 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val=decompress 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@24 -- #
accel_opc=decompress 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val= 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val=software 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val=32 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val=32 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val=2 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val=Yes 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val= 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:50.328 09:45:44 -- accel/accel.sh@21 -- # val= 00:07:50.328 09:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # IFS=: 00:07:50.328 09:45:44 -- accel/accel.sh@20 -- # read -r var val 00:07:52.230 09:45:45 -- accel/accel.sh@21 -- # val= 00:07:52.230 09:45:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.230 09:45:45 -- accel/accel.sh@20 -- # IFS=: 00:07:52.230 09:45:45 -- accel/accel.sh@20 -- # read -r var val 00:07:52.230 09:45:45 -- accel/accel.sh@21 -- # val= 00:07:52.230 09:45:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.230 09:45:45 -- accel/accel.sh@20 -- # IFS=: 00:07:52.230 09:45:45 -- accel/accel.sh@20 -- # read -r var val 00:07:52.230 09:45:45 -- accel/accel.sh@21 -- # val= 00:07:52.230 09:45:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.231 09:45:45 -- accel/accel.sh@20 -- # IFS=: 00:07:52.231 09:45:45 -- accel/accel.sh@20 -- # 
read -r var val 00:07:52.231 09:45:45 -- accel/accel.sh@21 -- # val= 00:07:52.231 09:45:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.231 09:45:45 -- accel/accel.sh@20 -- # IFS=: 00:07:52.231 09:45:45 -- accel/accel.sh@20 -- # read -r var val 00:07:52.231 09:45:45 -- accel/accel.sh@21 -- # val= 00:07:52.231 09:45:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.231 09:45:45 -- accel/accel.sh@20 -- # IFS=: 00:07:52.231 09:45:45 -- accel/accel.sh@20 -- # read -r var val 00:07:52.231 09:45:45 -- accel/accel.sh@21 -- # val= 00:07:52.231 09:45:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.231 09:45:45 -- accel/accel.sh@20 -- # IFS=: 00:07:52.231 09:45:45 -- accel/accel.sh@20 -- # read -r var val 00:07:52.231 ************************************ 00:07:52.231 END TEST accel_decomp_full_mthread ************************************ 00:07:52.231 09:45:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:52.231 09:45:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:52.231 09:45:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.231 00:07:52.231 real 0m4.739s 00:07:52.231 user 0m4.240s 00:07:52.231 sys 0m0.285s 00:07:52.231 09:45:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.231 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:07:52.231 09:45:45 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:52.231 09:45:45 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:52.231 09:45:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:52.231 09:45:45 -- accel/accel.sh@129 -- # build_accel_config 00:07:52.231 09:45:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.231 09:45:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.231 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:07:52.231 09:45:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.231 09:45:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.231 09:45:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.231 09:45:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.231 09:45:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.231 09:45:45 -- accel/accel.sh@42 -- # jq -r . 00:07:52.231 ************************************ 00:07:52.231 START TEST accel_dif_functional_tests ************************************ 00:07:52.231 09:45:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:52.489 [2024-06-10 09:45:46.003776] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:07:52.489 [2024-06-10 09:45:46.003927] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60926 ] 00:07:52.489 [2024-06-10 09:45:46.172394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.748 [2024-06-10 09:45:46.338250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.748 [2024-06-10 09:45:46.338403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.748 [2024-06-10 09:45:46.338410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.017 00:07:53.017 00:07:53.017 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.017 http://cunit.sourceforge.net/ 00:07:53.017 00:07:53.017 00:07:53.017 Suite: accel_dif 00:07:53.017 Test: verify: DIF generated, GUARD check ...passed 00:07:53.017 Test: verify: DIF generated, APPTAG check ...passed 00:07:53.017 Test: verify: DIF generated, REFTAG check ...passed 00:07:53.017 Test: verify: DIF not generated, GUARD check ...passed 00:07:53.017 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 09:45:46.605044] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:53.017 [2024-06-10 09:45:46.605220] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:53.017 [2024-06-10 09:45:46.605288] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:53.017 passed 00:07:53.017 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 09:45:46.605425] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:53.017 passed 00:07:53.017 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:53.017 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 09:45:46.605484] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:53.017 [2024-06-10 09:45:46.605668] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:53.017 passed 00:07:53.017 Test: verify: APPTAG incorrect, no APPTAG check ...passed[2024-06-10 09:45:46.605770] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:53.017 00 00:07:53.017 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:53.017 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:53.017 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 09:45:46.606428] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:53.017 passed 00:07:53.017 Test: generate copy: DIF generated, GUARD check ...passed 00:07:53.017 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:53.017 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:53.017 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:53.017 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:53.017 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:53.017 Test: generate copy: iovecs-len validate ...passed 00:07:53.017 Test: generate copy: buffer alignment validate ...passed 00:07:53.017 00:07:53.017 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.017 suites 1 1 n/a 0 0 00:07:53.017 tests 20 20 20 0 0
asserts 204 204 204 0 n/a 00:07:53.017 00:07:53.017 Elapsed time = 0.007 seconds 00:07:53.017 [2024-06-10 09:45:46.607256] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:53.967 00:07:53.967 real 0m1.717s 00:07:53.967 user 0m3.327s 00:07:53.967 sys 0m0.189s 00:07:53.967 ************************************ 00:07:53.967 END TEST accel_dif_functional_tests 00:07:53.967 ************************************ 00:07:53.967 09:45:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.967 09:45:47 -- common/autotest_common.sh@10 -- # set +x 00:07:53.967 00:07:53.967 real 1m41.439s 00:07:53.967 user 1m52.053s 00:07:53.967 sys 0m7.506s 00:07:53.967 09:45:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.967 ************************************ 00:07:53.967 END TEST accel 00:07:53.967 ************************************ 00:07:53.967 09:45:47 -- common/autotest_common.sh@10 -- # set +x 00:07:53.967 09:45:47 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:53.967 09:45:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:53.967 09:45:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.967 09:45:47 -- common/autotest_common.sh@10 -- # set +x 00:07:53.967 ************************************ 00:07:53.967 START TEST accel_rpc 00:07:53.967 ************************************ 00:07:53.967 09:45:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:54.226 * Looking for test storage... 00:07:54.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:54.226 09:45:47 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:54.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.226 09:45:47 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=61007 00:07:54.226 09:45:47 -- accel/accel_rpc.sh@15 -- # waitforlisten 61007 00:07:54.226 09:45:47 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:54.226 09:45:47 -- common/autotest_common.sh@819 -- # '[' -z 61007 ']' 00:07:54.226 09:45:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.226 09:45:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:54.226 09:45:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.226 09:45:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:54.226 09:45:47 -- common/autotest_common.sh@10 -- # set +x 00:07:54.226 [2024-06-10 09:45:47.889130] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
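The accel_rpc suite starting here exercises opcode-to-module assignment over JSON-RPC. The target is launched with --wait-for-rpc precisely so that assignments can land before the accel framework initializes; once framework_start_init has run, the assignment is fixed and can only be queried. A sketch of the flow the test drives, assuming the same repo layout (the harness waits for /var/tmp/spdk.sock before issuing RPCs):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software    # must happen pre-init
    "$SPDK/scripts/rpc.py" framework_start_init
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy # expect: software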
00:07:54.226 [2024-06-10 09:45:47.889518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61007 ] 00:07:54.485 [2024-06-10 09:45:48.050587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.485 [2024-06-10 09:45:48.220036] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.485 [2024-06-10 09:45:48.220504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.422 09:45:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:55.422 09:45:48 -- common/autotest_common.sh@852 -- # return 0 00:07:55.422 09:45:48 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:55.422 09:45:48 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:55.422 09:45:48 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:55.422 09:45:48 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:55.422 09:45:48 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:55.422 09:45:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:55.422 09:45:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.422 09:45:48 -- common/autotest_common.sh@10 -- # set +x 00:07:55.422 ************************************ 00:07:55.422 START TEST accel_assign_opcode 00:07:55.422 ************************************ 00:07:55.422 09:45:48 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:55.422 09:45:48 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:55.422 09:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.422 09:45:48 -- common/autotest_common.sh@10 -- # set +x 00:07:55.422 [2024-06-10 09:45:48.865557] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:55.422 09:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.422 09:45:48 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:55.422 09:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.422 09:45:48 -- common/autotest_common.sh@10 -- # set +x 00:07:55.422 [2024-06-10 09:45:48.873513] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:55.422 09:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.422 09:45:48 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:55.422 09:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.422 09:45:48 -- common/autotest_common.sh@10 -- # set +x 00:07:55.990 09:45:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.990 09:45:49 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:55.990 09:45:49 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:55.990 09:45:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.990 09:45:49 -- common/autotest_common.sh@10 -- # set +x 00:07:55.990 09:45:49 -- accel/accel_rpc.sh@42 -- # grep software 00:07:55.990 09:45:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.990 software 00:07:55.990 00:07:55.990 real 0m0.677s 00:07:55.990 user 0m0.055s 00:07:55.990 sys 0m0.009s 00:07:55.990 09:45:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.990 ************************************ 00:07:55.990 END TEST accel_assign_opcode 00:07:55.990 
************************************ 00:07:55.990 09:45:49 -- common/autotest_common.sh@10 -- # set +x 00:07:55.990 09:45:49 -- accel/accel_rpc.sh@55 -- # killprocess 61007 00:07:55.990 09:45:49 -- common/autotest_common.sh@926 -- # '[' -z 61007 ']' 00:07:55.990 09:45:49 -- common/autotest_common.sh@930 -- # kill -0 61007 00:07:55.990 09:45:49 -- common/autotest_common.sh@931 -- # uname 00:07:55.990 09:45:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:55.990 09:45:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61007 00:07:55.990 killing process with pid 61007 00:07:55.990 09:45:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:55.990 09:45:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:55.990 09:45:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61007' 00:07:55.990 09:45:49 -- common/autotest_common.sh@945 -- # kill 61007 00:07:55.990 09:45:49 -- common/autotest_common.sh@950 -- # wait 61007 00:07:57.896 00:07:57.896 real 0m3.774s 00:07:57.897 user 0m3.899s 00:07:57.897 sys 0m0.422s 00:07:57.897 09:45:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.897 ************************************ 00:07:57.897 END TEST accel_rpc 00:07:57.897 ************************************ 00:07:57.897 09:45:51 -- common/autotest_common.sh@10 -- # set +x 00:07:57.897 09:45:51 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:57.897 09:45:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.897 09:45:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.897 09:45:51 -- common/autotest_common.sh@10 -- # set +x 00:07:57.897 ************************************ 00:07:57.897 START TEST app_cmdline 00:07:57.897 ************************************ 00:07:57.897 09:45:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:57.897 * Looking for test storage... 00:07:57.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:57.897 09:45:51 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:57.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.897 09:45:51 -- app/cmdline.sh@17 -- # spdk_tgt_pid=61116 00:07:57.897 09:45:51 -- app/cmdline.sh@18 -- # waitforlisten 61116 00:07:57.897 09:45:51 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:57.897 09:45:51 -- common/autotest_common.sh@819 -- # '[' -z 61116 ']' 00:07:57.897 09:45:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.897 09:45:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:57.897 09:45:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.897 09:45:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:57.897 09:45:51 -- common/autotest_common.sh@10 -- # set +x 00:07:58.156 [2024-06-10 09:45:51.723448] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
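app_cmdline is an RPC allowlist test: the target comes up with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods must succeed and any other method must be rejected with JSON-RPC error -32601 ("Method not found"), which is what the env_dpdk_get_mem_stats probe below confirms. Sketched with the repo's rpc.py under the same path assumptions:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    "$SPDK/scripts/rpc.py" spdk_get_version        # allowed: prints the version object
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats  # not on the list: expect code -32601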
00:07:58.156 [2024-06-10 09:45:51.723811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61116 ] 00:07:58.156 [2024-06-10 09:45:51.889535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.415 [2024-06-10 09:45:52.052376] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:58.415 [2024-06-10 09:45:52.052918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.792 09:45:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:59.792 09:45:53 -- common/autotest_common.sh@852 -- # return 0 00:07:59.792 09:45:53 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:59.792 { 00:07:59.792 "version": "SPDK v24.01.1-pre git sha1 130b9406a", 00:07:59.792 "fields": { 00:07:59.792 "major": 24, 00:07:59.792 "minor": 1, 00:07:59.792 "patch": 1, 00:07:59.792 "suffix": "-pre", 00:07:59.792 "commit": "130b9406a" 00:07:59.792 } 00:07:59.792 } 00:07:59.792 09:45:53 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:59.792 09:45:53 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:59.792 09:45:53 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:59.792 09:45:53 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:59.792 09:45:53 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:59.792 09:45:53 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:59.792 09:45:53 -- app/cmdline.sh@26 -- # sort 00:07:59.792 09:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.792 09:45:53 -- common/autotest_common.sh@10 -- # set +x 00:07:59.792 09:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.051 09:45:53 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:00.051 09:45:53 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:00.051 09:45:53 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.051 09:45:53 -- common/autotest_common.sh@640 -- # local es=0 00:08:00.051 09:45:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.051 09:45:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.051 09:45:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:00.051 09:45:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.051 09:45:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:00.051 09:45:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.051 09:45:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:00.051 09:45:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.051 09:45:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:00.051 09:45:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.051 request: 00:08:00.051 { 00:08:00.051 "method": "env_dpdk_get_mem_stats", 00:08:00.051 "req_id": 1 00:08:00.051 } 00:08:00.051 Got 
JSON-RPC error response 00:08:00.051 response: 00:08:00.051 { 00:08:00.051 "code": -32601, 00:08:00.051 "message": "Method not found" 00:08:00.051 } 00:08:00.310 09:45:53 -- common/autotest_common.sh@643 -- # es=1 00:08:00.310 09:45:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:00.310 09:45:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:00.310 09:45:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:00.310 09:45:53 -- app/cmdline.sh@1 -- # killprocess 61116 00:08:00.310 09:45:53 -- common/autotest_common.sh@926 -- # '[' -z 61116 ']' 00:08:00.310 09:45:53 -- common/autotest_common.sh@930 -- # kill -0 61116 00:08:00.310 09:45:53 -- common/autotest_common.sh@931 -- # uname 00:08:00.310 09:45:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:00.310 09:45:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61116 00:08:00.310 killing process with pid 61116 00:08:00.310 09:45:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:00.310 09:45:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:00.310 09:45:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61116' 00:08:00.310 09:45:53 -- common/autotest_common.sh@945 -- # kill 61116 00:08:00.310 09:45:53 -- common/autotest_common.sh@950 -- # wait 61116 00:08:02.215 00:08:02.215 real 0m4.182s 00:08:02.215 user 0m4.763s 00:08:02.215 sys 0m0.509s 00:08:02.215 ************************************ 00:08:02.216 END TEST app_cmdline 00:08:02.216 ************************************ 00:08:02.216 09:45:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.216 09:45:55 -- common/autotest_common.sh@10 -- # set +x 00:08:02.216 09:45:55 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:02.216 09:45:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:02.216 09:45:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:02.216 09:45:55 -- common/autotest_common.sh@10 -- # set +x 00:08:02.216 ************************************ 00:08:02.216 START TEST version 00:08:02.216 ************************************ 00:08:02.216 09:45:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:02.216 * Looking for test storage... 
00:08:02.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:02.216 09:45:55 -- app/version.sh@17 -- # get_header_version major 00:08:02.216 09:45:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:02.216 09:45:55 -- app/version.sh@14 -- # cut -f2 00:08:02.216 09:45:55 -- app/version.sh@14 -- # tr -d '"' 00:08:02.216 09:45:55 -- app/version.sh@17 -- # major=24 00:08:02.216 09:45:55 -- app/version.sh@18 -- # get_header_version minor 00:08:02.216 09:45:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:02.216 09:45:55 -- app/version.sh@14 -- # cut -f2 00:08:02.216 09:45:55 -- app/version.sh@14 -- # tr -d '"' 00:08:02.216 09:45:55 -- app/version.sh@18 -- # minor=1 00:08:02.216 09:45:55 -- app/version.sh@19 -- # get_header_version patch 00:08:02.216 09:45:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:02.216 09:45:55 -- app/version.sh@14 -- # cut -f2 00:08:02.216 09:45:55 -- app/version.sh@14 -- # tr -d '"' 00:08:02.216 09:45:55 -- app/version.sh@19 -- # patch=1 00:08:02.216 09:45:55 -- app/version.sh@20 -- # get_header_version suffix 00:08:02.216 09:45:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:02.216 09:45:55 -- app/version.sh@14 -- # cut -f2 00:08:02.216 09:45:55 -- app/version.sh@14 -- # tr -d '"' 00:08:02.216 09:45:55 -- app/version.sh@20 -- # suffix=-pre 00:08:02.216 09:45:55 -- app/version.sh@22 -- # version=24.1 00:08:02.216 09:45:55 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:02.216 09:45:55 -- app/version.sh@25 -- # version=24.1.1 00:08:02.216 09:45:55 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:02.216 09:45:55 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:02.216 09:45:55 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:02.216 09:45:55 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:02.216 09:45:55 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:02.216 00:08:02.216 real 0m0.142s 00:08:02.216 user 0m0.093s 00:08:02.216 sys 0m0.079s 00:08:02.216 ************************************ 00:08:02.216 END TEST version 00:08:02.216 09:45:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.216 09:45:55 -- common/autotest_common.sh@10 -- # set +x 00:08:02.216 ************************************ 00:08:02.216 09:45:55 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:08:02.216 09:45:55 -- spdk/autotest.sh@204 -- # uname -s 00:08:02.216 09:45:55 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:08:02.216 09:45:55 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:02.216 09:45:55 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:02.216 09:45:55 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:08:02.216 09:45:55 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:02.216 09:45:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:02.216 09:45:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:02.216 09:45:55 -- common/autotest_common.sh@10 -- # set +x 00:08:02.216 
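The version suite above stitches the release string together from include/spdk/version.h: each component is scraped out with a grep/cut/tr pipeline, assembled into 24.1.1rc0, and compared against what the Python package reports via spdk.__version__. The extraction step, condensed (this assumes the stock tab-separated #define layout of version.h):

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    echo "$major.$minor.$patch"   # -> 24.1.1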
************************************ 00:08:02.216 START TEST blockdev_nvme 00:08:02.216 ************************************ 00:08:02.216 09:45:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:02.475 * Looking for test storage... 00:08:02.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:02.475 09:45:56 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:02.475 09:45:56 -- bdev/nbd_common.sh@6 -- # set -e 00:08:02.475 09:45:56 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:02.475 09:45:56 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:02.475 09:45:56 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:02.475 09:45:56 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:02.475 09:45:56 -- bdev/blockdev.sh@18 -- # : 00:08:02.475 09:45:56 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:02.475 09:45:56 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:02.475 09:45:56 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:02.475 09:45:56 -- bdev/blockdev.sh@672 -- # uname -s 00:08:02.475 09:45:56 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:02.475 09:45:56 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:02.475 09:45:56 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:08:02.475 09:45:56 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:02.475 09:45:56 -- bdev/blockdev.sh@682 -- # dek= 00:08:02.475 09:45:56 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:02.475 09:45:56 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:02.475 09:45:56 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:02.475 09:45:56 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:08:02.475 09:45:56 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:08:02.475 09:45:56 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:02.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.475 09:45:56 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61289 00:08:02.475 09:45:56 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:02.475 09:45:56 -- bdev/blockdev.sh@47 -- # waitforlisten 61289 00:08:02.475 09:45:56 -- common/autotest_common.sh@819 -- # '[' -z 61289 ']' 00:08:02.475 09:45:56 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:02.475 09:45:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.475 09:45:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:02.475 09:45:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.475 09:45:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:02.475 09:45:56 -- common/autotest_common.sh@10 -- # set +x 00:08:02.475 [2024-06-10 09:45:56.176726] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:02.475 [2024-06-10 09:45:56.177162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61289 ] 00:08:02.734 [2024-06-10 09:45:56.345295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.993 [2024-06-10 09:45:56.517291] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:02.993 [2024-06-10 09:45:56.517778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.371 09:45:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:04.371 09:45:57 -- common/autotest_common.sh@852 -- # return 0 00:08:04.371 09:45:57 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:04.371 09:45:57 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:08:04.371 09:45:57 -- bdev/blockdev.sh@79 -- # local json 00:08:04.371 09:45:57 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:04.371 09:45:57 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:04.371 09:45:57 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:04.371 09:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.371 09:45:57 -- common/autotest_common.sh@10 -- # set +x 00:08:04.631 09:45:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.631 09:45:58 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:04.631 09:45:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.631 09:45:58 -- common/autotest_common.sh@10 -- # set +x 00:08:04.631 09:45:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.631 09:45:58 -- bdev/blockdev.sh@738 -- # cat 00:08:04.631 09:45:58 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:04.631 09:45:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.631 09:45:58 -- common/autotest_common.sh@10 -- # set +x 00:08:04.631 09:45:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.631 09:45:58 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:04.631 09:45:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.631 09:45:58 -- common/autotest_common.sh@10 -- # set +x 00:08:04.631 09:45:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.631 09:45:58 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:04.631 09:45:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.631 09:45:58 -- common/autotest_common.sh@10 -- # set +x 00:08:04.631 09:45:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.631 09:45:58 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:04.631 09:45:58 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:04.631 09:45:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.631 09:45:58 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:04.631 09:45:58 -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.631 09:45:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.631 09:45:58 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:04.631 09:45:58 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:04.632 09:45:58 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d3599b9d-9b98-406f-97c1-b3b42ca3c478"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d3599b9d-9b98-406f-97c1-b3b42ca3c478",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "74f5acc6-0719-4d7e-b3dc-4211218b17bf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "74f5acc6-0719-4d7e-b3dc-4211218b17bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e24eea43-5ab1-46de-a042-af0b1aca0fa2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e24eea43-5ab1-46de-a042-af0b1aca0fa2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "100c8a38-cfd9-4fe2-8e17-e10cebafebef"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "100c8a38-cfd9-4fe2-8e17-e10cebafebef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9b1df7e1-b97b-4e17-b7f5-f8e9c84b9c1a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9b1df7e1-b97b-4e17-b7f5-f8e9c84b9c1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7ca2ad9c-bd48-4d68-af6d-465f9724ef40"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"7ca2ad9c-bd48-4d68-af6d-465f9724ef40",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:04.891 09:45:58 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:04.891 09:45:58 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:08:04.891 09:45:58 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:04.891 09:45:58 -- bdev/blockdev.sh@752 -- # killprocess 61289 00:08:04.891 09:45:58 -- common/autotest_common.sh@926 -- # '[' -z 61289 ']' 00:08:04.891 09:45:58 -- common/autotest_common.sh@930 -- # kill -0 61289 00:08:04.891 09:45:58 -- common/autotest_common.sh@931 -- # uname 00:08:04.891 09:45:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:04.891 09:45:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61289 00:08:04.891 killing process with pid 61289 00:08:04.891 09:45:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:04.891 09:45:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:04.891 09:45:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61289' 00:08:04.891 09:45:58 -- common/autotest_common.sh@945 -- # kill 61289 00:08:04.891 09:45:58 -- common/autotest_common.sh@950 -- # wait 61289 00:08:06.813 09:46:00 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:06.813 09:46:00 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:06.813 09:46:00 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:06.813 09:46:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.813 09:46:00 -- common/autotest_common.sh@10 -- # set +x 00:08:06.813 ************************************ 00:08:06.813 START TEST bdev_hello_world 00:08:06.813 ************************************ 00:08:06.813 09:46:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:06.813 [2024-06-10 09:46:00.416538] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:06.813 [2024-06-10 09:46:00.416945] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61387 ] 00:08:07.072 [2024-06-10 09:46:00.584650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.072 [2024-06-10 09:46:00.752702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.639 [2024-06-10 09:46:01.322390] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:07.639 [2024-06-10 09:46:01.322465] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:07.639 [2024-06-10 09:46:01.322514] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:07.639 [2024-06-10 09:46:01.325722] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:07.639 [2024-06-10 09:46:01.326467] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:07.639 [2024-06-10 09:46:01.326526] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:07.639 [2024-06-10 09:46:01.326786] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:07.639 00:08:07.639 [2024-06-10 09:46:01.326818] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:08.599 00:08:08.599 real 0m2.000s 00:08:08.599 user 0m1.687s 00:08:08.599 sys 0m0.204s 00:08:08.599 09:46:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.599 09:46:02 -- common/autotest_common.sh@10 -- # set +x 00:08:08.599 ************************************ 00:08:08.599 END TEST bdev_hello_world 00:08:08.599 ************************************ 00:08:08.858 09:46:02 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:08.858 09:46:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:08.858 09:46:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.858 09:46:02 -- common/autotest_common.sh@10 -- # set +x 00:08:08.858 ************************************ 00:08:08.858 START TEST bdev_bounds 00:08:08.858 ************************************ 00:08:08.858 09:46:02 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:08:08.858 Process bdevio pid: 61429 00:08:08.858 09:46:02 -- bdev/blockdev.sh@288 -- # bdevio_pid=61429 00:08:08.858 09:46:02 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:08.858 09:46:02 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61429' 00:08:08.858 09:46:02 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:08.858 09:46:02 -- bdev/blockdev.sh@291 -- # waitforlisten 61429 00:08:08.858 09:46:02 -- common/autotest_common.sh@819 -- # '[' -z 61429 ']' 00:08:08.858 09:46:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.858 09:46:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:08.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.858 09:46:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
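bdev_bounds drives the bdevio harness in wait mode: bdevio comes up with -w against the generated bdev.json and idles until tests.py issues perform_tests, which kicks off the per-bdev CUnit suites shown below. The wiring, sketched with the CI's paths:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests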
00:08:08.858 09:46:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:08.858 09:46:02 -- common/autotest_common.sh@10 -- # set +x 00:08:08.858 [2024-06-10 09:46:02.464833] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:08.858 [2024-06-10 09:46:02.465281] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61429 ] 00:08:09.116 [2024-06-10 09:46:02.634639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:09.116 [2024-06-10 09:46:02.803508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.116 [2024-06-10 09:46:02.803631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.116 [2024-06-10 09:46:02.803660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.491 09:46:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:10.491 09:46:04 -- common/autotest_common.sh@852 -- # return 0 00:08:10.491 09:46:04 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:10.491 I/O targets: 00:08:10.491 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:10.491 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:10.491 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:10.491 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:10.491 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:10.491 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:10.491 00:08:10.491 00:08:10.491 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.491 http://cunit.sourceforge.net/ 00:08:10.491 00:08:10.491 00:08:10.491 Suite: bdevio tests on: Nvme3n1 00:08:10.491 Test: blockdev write read block ...passed 00:08:10.491 Test: blockdev write zeroes read block ...passed 00:08:10.491 Test: blockdev write zeroes read no split ...passed 00:08:10.749 Test: blockdev write zeroes read split ...passed 00:08:10.749 Test: blockdev write zeroes read split partial ...passed 00:08:10.749 Test: blockdev reset ...[2024-06-10 09:46:04.288393] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:10.749 [2024-06-10 09:46:04.292204] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
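Each I/O target line above is simply the bdev's block count times its 4096-byte block size, converted to MiB. A quick check reproduces the exact figures (Nvme1n1 and Nvme3n1 divide evenly):

    echo $(( 1310720 * 4096 / 1024 / 1024 ))   # 5120 -> 'Nvme1n1: ... (5120 MiB)'
    echo $(( 262144 * 4096 / 1024 / 1024 ))    # 1024 -> 'Nvme3n1: ... (1024 MiB)'

The odd-sized Nvme0n1 (1548666 blocks) works out to just under 6050 MiB, which the harness evidently rounds up to the 6050 shown.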
00:08:10.749 passed 00:08:10.749 Test: blockdev write read 8 blocks ...passed 00:08:10.749 Test: blockdev write read size > 128k ...passed 00:08:10.749 Test: blockdev write read invalid size ...passed 00:08:10.749 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:10.749 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:10.749 Test: blockdev write read max offset ...passed 00:08:10.749 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:10.749 Test: blockdev writev readv 8 blocks ...passed 00:08:10.749 Test: blockdev writev readv 30 x 1block ...passed 00:08:10.749 Test: blockdev writev readv block ...passed 00:08:10.749 Test: blockdev writev readv size > 128k ...passed 00:08:10.749 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:10.749 Test: blockdev comparev and writev ...[2024-06-10 09:46:04.301706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26880e000 len:0x1000 00:08:10.749 passed 00:08:10.749 Test: blockdev nvme passthru rw ...[2024-06-10 09:46:04.302042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:10.749 passed 00:08:10.749 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:46:04.302974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:10.749 [2024-06-10 09:46:04.303099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:10.749 passed 00:08:10.749 Test: blockdev nvme admin passthru ...passed 00:08:10.749 Test: blockdev copy ...passed 00:08:10.749 Suite: bdevio tests on: Nvme2n3 00:08:10.749 Test: blockdev write read block ...passed 00:08:10.749 Test: blockdev write zeroes read block ...passed 00:08:10.749 Test: blockdev write zeroes read no split ...passed 00:08:10.749 Test: blockdev write zeroes read split ...passed 00:08:10.749 Test: blockdev write zeroes read split partial ...passed 00:08:10.750 Test: blockdev reset ...[2024-06-10 09:46:04.374364] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:10.750 [2024-06-10 09:46:04.378256] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
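The (02/85) and (00/01) pairs in the completion NOTICEs above are NVMe status printed as SCT/SC in hex: status code type 02h is Media and Data Integrity Errors, under which 85h is Compare Failure, and type 00h is Generic Command Status, where 01h is Invalid Command Opcode. Both completions are provoked deliberately, comparev runs against mismatching data and the vendor-specific passthru sends an opcode the controller must reject, which is why the trailing 'passed' lines still appear. A hypothetical helper, not part of the test suite, for decoding the two pairs seen in this run:

    decode_nvme_status() {   # usage: decode_nvme_status 02 85
        case "$1/$2" in
            00/01) echo 'Generic Command Status: Invalid Command Opcode' ;;
            02/85) echo 'Media and Data Integrity Errors: Compare Failure' ;;
            *)     echo "SCT 0x$1 / SC 0x$2: see the NVMe spec status code tables" ;;
        esac
    }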
00:08:10.750 passed 00:08:10.750 Test: blockdev write read 8 blocks ...passed 00:08:10.750 Test: blockdev write read size > 128k ...passed 00:08:10.750 Test: blockdev write read invalid size ...passed 00:08:10.750 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:10.750 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:10.750 Test: blockdev write read max offset ...passed 00:08:10.750 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:10.750 Test: blockdev writev readv 8 blocks ...passed 00:08:10.750 Test: blockdev writev readv 30 x 1block ...passed 00:08:10.750 Test: blockdev writev readv block ...passed 00:08:10.750 Test: blockdev writev readv size > 128k ...passed 00:08:10.750 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:10.750 Test: blockdev comparev and writev ...[2024-06-10 09:46:04.387401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26880a000 len:0x1000 00:08:10.750 passed 00:08:10.750 Test: blockdev nvme passthru rw ...[2024-06-10 09:46:04.387655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:10.750 passed 00:08:10.750 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:46:04.388826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:10.750 passed 00:08:10.750 Test: blockdev nvme admin passthru ...[2024-06-10 09:46:04.389090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:10.750 passed 00:08:10.750 Test: blockdev copy ...passed 00:08:10.750 Suite: bdevio tests on: Nvme2n2 00:08:10.750 Test: blockdev write read block ...passed 00:08:10.750 Test: blockdev write zeroes read block ...passed 00:08:10.750 Test: blockdev write zeroes read no split ...passed 00:08:10.750 Test: blockdev write zeroes read split ...passed 00:08:10.750 Test: blockdev write zeroes read split partial ...passed 00:08:10.750 Test: blockdev reset ...[2024-06-10 09:46:04.462988] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:10.750 [2024-06-10 09:46:04.466624] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:10.750 passed 00:08:10.750 Test: blockdev write read 8 blocks ...passed 00:08:10.750 Test: blockdev write read size > 128k ...passed 00:08:10.750 Test: blockdev write read invalid size ...passed 00:08:10.750 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:10.750 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:10.750 Test: blockdev write read max offset ...passed 00:08:10.750 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:10.750 Test: blockdev writev readv 8 blocks ...passed 00:08:10.750 Test: blockdev writev readv 30 x 1block ...passed 00:08:10.750 Test: blockdev writev readv block ...passed 00:08:10.750 Test: blockdev writev readv size > 128k ...passed 00:08:10.750 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:10.750 Test: blockdev comparev and writev ...[2024-06-10 09:46:04.477191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26f806000 len:0x1000 00:08:10.750 [2024-06-10 09:46:04.477504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:10.750 passed 00:08:10.750 Test: blockdev nvme passthru rw ...passed 00:08:10.750 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:46:04.478964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:10.750 [2024-06-10 09:46:04.479231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:10.750 passed 00:08:10.750 Test: blockdev nvme admin passthru ...passed 00:08:10.750 Test: blockdev copy ...passed 00:08:10.750 Suite: bdevio tests on: Nvme2n1 00:08:10.750 Test: blockdev write read block ...passed 00:08:10.750 Test: blockdev write zeroes read block ...passed 00:08:10.750 Test: blockdev write zeroes read no split ...passed 00:08:11.009 Test: blockdev write zeroes read split ...passed 00:08:11.009 Test: blockdev write zeroes read split partial ...passed 00:08:11.009 Test: blockdev reset ...[2024-06-10 09:46:04.549163] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:11.009 [2024-06-10 09:46:04.552781] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
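Note that the Nvme2n1, Nvme2n2 and Nvme2n3 suites all reset the same controller at PCI 0000:00:09.0's neighbor, 0000:00:08.0: they are three namespaces of one multi-namespace QEMU controller, so each per-bdev reset actually cycles the shared controller, and every suite still comes back with 'Resetting controller successful'. Against an SPDK app that loaded the same config, the namespace layout behind the bdevs can be inspected with the standard RPCs (socket path as used earlier in this run):

    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_get_controllers
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq '.[] | {name: .name, nvme: .driver_specific.nvme}'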
00:08:11.009 passed 00:08:11.009 Test: blockdev write read 8 blocks ...passed 00:08:11.009 Test: blockdev write read size > 128k ...passed 00:08:11.009 Test: blockdev write read invalid size ...passed 00:08:11.009 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:11.009 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:11.009 Test: blockdev write read max offset ...passed 00:08:11.009 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:11.009 Test: blockdev writev readv 8 blocks ...passed 00:08:11.009 Test: blockdev writev readv 30 x 1block ...passed 00:08:11.009 Test: blockdev writev readv block ...passed 00:08:11.009 Test: blockdev writev readv size > 128k ...passed 00:08:11.009 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:11.009 Test: blockdev comparev and writev ...[2024-06-10 09:46:04.561844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26f801000 len:0x1000 00:08:11.009 passed 00:08:11.009 Test: blockdev nvme passthru rw ...[2024-06-10 09:46:04.562174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:11.009 passed 00:08:11.009 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:46:04.563006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:11.009 passed 00:08:11.009 Test: blockdev nvme admin passthru ...[2024-06-10 09:46:04.563284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:11.009 passed 00:08:11.009 Test: blockdev copy ...passed 00:08:11.009 Suite: bdevio tests on: Nvme1n1 00:08:11.009 Test: blockdev write read block ...passed 00:08:11.009 Test: blockdev write zeroes read block ...passed 00:08:11.009 Test: blockdev write zeroes read no split ...passed 00:08:11.009 Test: blockdev write zeroes read split ...passed 00:08:11.009 Test: blockdev write zeroes read split partial ...passed 00:08:11.009 Test: blockdev reset ...[2024-06-10 09:46:04.634950] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:11.009 [2024-06-10 09:46:04.638424] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:11.009 passed 00:08:11.009 Test: blockdev write read 8 blocks ...passed 00:08:11.009 Test: blockdev write read size > 128k ...passed 00:08:11.009 Test: blockdev write read invalid size ...passed 00:08:11.009 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:11.009 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:11.009 Test: blockdev write read max offset ...passed 00:08:11.009 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:11.009 Test: blockdev writev readv 8 blocks ...passed 00:08:11.009 Test: blockdev writev readv 30 x 1block ...passed 00:08:11.009 Test: blockdev writev readv block ...passed 00:08:11.009 Test: blockdev writev readv size > 128k ...passed 00:08:11.009 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:11.009 Test: blockdev comparev and writev ...[2024-06-10 09:46:04.647733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26d406000 len:0x1000 00:08:11.009 passed 00:08:11.009 Test: blockdev nvme passthru rw ...[2024-06-10 09:46:04.647979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:11.009 passed 00:08:11.009 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:46:04.648941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:11.009 passed 00:08:11.009 Test: blockdev nvme admin passthru ...[2024-06-10 09:46:04.649184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:11.009 passed 00:08:11.009 Test: blockdev copy ...passed 00:08:11.009 Suite: bdevio tests on: Nvme0n1 00:08:11.009 Test: blockdev write read block ...passed 00:08:11.009 Test: blockdev write zeroes read block ...passed 00:08:11.009 Test: blockdev write zeroes read no split ...passed 00:08:11.009 Test: blockdev write zeroes read split ...passed 00:08:11.009 Test: blockdev write zeroes read split partial ...passed 00:08:11.009 Test: blockdev reset ...[2024-06-10 09:46:04.721149] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:11.009 [2024-06-10 09:46:04.724637] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
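The Nvme0n1 suite concluding below closes out the run, and the CUnit summary that follows counts 138 tests because each of the six bdevs gets the identical 23-test matrix:

    echo $(( 6 * 23 ))   # 138, matching 'tests 138 138 138 0 0' in the run summary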
00:08:11.009 passed 00:08:11.009 Test: blockdev write read 8 blocks ...passed 00:08:11.009 Test: blockdev write read size > 128k ...passed 00:08:11.009 Test: blockdev write read invalid size ...passed 00:08:11.010 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:11.010 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:11.010 Test: blockdev write read max offset ...passed 00:08:11.010 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:11.010 Test: blockdev writev readv 8 blocks ...passed 00:08:11.010 Test: blockdev writev readv 30 x 1block ...passed 00:08:11.010 Test: blockdev writev readv block ...passed 00:08:11.010 Test: blockdev writev readv size > 128k ...passed 00:08:11.010 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:11.010 Test: blockdev comparev and writev ...passed 00:08:11.010 Test: blockdev nvme passthru rw ...[2024-06-10 09:46:04.733246] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:11.010 separate metadata which is not supported yet. 00:08:11.010 passed 00:08:11.010 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:46:04.733920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:11.010 passed 00:08:11.010 Test: blockdev nvme admin passthru ...[2024-06-10 09:46:04.734156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:11.010 passed 00:08:11.010 Test: blockdev copy ...passed 00:08:11.010 00:08:11.010 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.010 suites 6 6 n/a 0 0 00:08:11.010 tests 138 138 138 0 0 00:08:11.010 asserts 893 893 893 0 n/a 00:08:11.010 00:08:11.010 Elapsed time = 1.429 seconds 00:08:11.010 0 00:08:11.010 09:46:04 -- bdev/blockdev.sh@293 -- # killprocess 61429 00:08:11.010 09:46:04 -- common/autotest_common.sh@926 -- # '[' -z 61429 ']' 00:08:11.010 09:46:04 -- common/autotest_common.sh@930 -- # kill -0 61429 00:08:11.010 09:46:04 -- common/autotest_common.sh@931 -- # uname 00:08:11.010 09:46:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:11.010 09:46:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61429 00:08:11.269 09:46:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:11.269 09:46:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:11.269 09:46:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61429' 00:08:11.269 killing process with pid 61429 00:08:11.269 09:46:04 -- common/autotest_common.sh@945 -- # kill 61429 00:08:11.269 09:46:04 -- common/autotest_common.sh@950 -- # wait 61429 00:08:12.205 09:46:05 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:12.205 00:08:12.205 real 0m3.296s 00:08:12.205 user 0m8.772s 00:08:12.205 sys 0m0.363s 00:08:12.205 09:46:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.206 09:46:05 -- common/autotest_common.sh@10 -- # set +x 00:08:12.206 ************************************ 00:08:12.206 END TEST bdev_bounds 00:08:12.206 ************************************ 00:08:12.206 09:46:05 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:12.206 09:46:05 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:12.206 09:46:05 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.206 09:46:05 -- common/autotest_common.sh@10 -- # set +x 00:08:12.206 ************************************ 00:08:12.206 START TEST bdev_nbd 00:08:12.206 ************************************ 00:08:12.206 09:46:05 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:12.206 09:46:05 -- bdev/blockdev.sh@298 -- # uname -s 00:08:12.206 09:46:05 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:12.206 09:46:05 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.206 09:46:05 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:12.206 09:46:05 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:12.206 09:46:05 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:12.206 09:46:05 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:08:12.206 09:46:05 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:12.206 09:46:05 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:12.206 09:46:05 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:12.206 09:46:05 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:08:12.206 09:46:05 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:12.206 09:46:05 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:12.206 09:46:05 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:12.206 09:46:05 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:12.206 09:46:05 -- bdev/blockdev.sh@316 -- # nbd_pid=61496 00:08:12.206 09:46:05 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:12.206 09:46:05 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:12.206 09:46:05 -- bdev/blockdev.sh@318 -- # waitforlisten 61496 /var/tmp/spdk-nbd.sock 00:08:12.206 09:46:05 -- common/autotest_common.sh@819 -- # '[' -z 61496 ']' 00:08:12.206 09:46:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:12.206 09:46:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:12.206 09:46:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:12.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:12.206 09:46:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:12.206 09:46:05 -- common/autotest_common.sh@10 -- # set +x 00:08:12.206 [2024-06-10 09:46:05.832174] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
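bdev_nbd brings up its own bdev_svc instance on a dedicated RPC socket so the NBD RPCs cannot collide with anything else on the node: launch in the background, then block until the UNIX socket answers. waitforlisten is the real helper from autotest_common.sh; the polling loop below is only a rough sketch of what it waits for:

    sock=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    nbd_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; do
        kill -0 "$nbd_pid" || exit 1   # give up if the app died during startup
        sleep 0.1
    done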
00:08:12.206 [2024-06-10 09:46:05.832340] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.465 [2024-06-10 09:46:06.004678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.465 [2024-06-10 09:46:06.177082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.841 09:46:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:13.841 09:46:07 -- common/autotest_common.sh@852 -- # return 0 00:08:13.841 09:46:07 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:13.841 09:46:07 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.841 09:46:07 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:13.841 09:46:07 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:13.841 09:46:07 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:13.841 09:46:07 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.841 09:46:07 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:13.841 09:46:07 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:13.841 09:46:07 -- bdev/nbd_common.sh@24 -- # local i 00:08:13.841 09:46:07 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:13.841 09:46:07 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:13.841 09:46:07 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:13.841 09:46:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:14.100 09:46:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:14.100 09:46:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:14.100 09:46:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:14.100 09:46:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:14.100 09:46:07 -- common/autotest_common.sh@857 -- # local i 00:08:14.100 09:46:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:14.100 09:46:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:14.100 09:46:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:14.100 09:46:07 -- common/autotest_common.sh@861 -- # break 00:08:14.100 09:46:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:14.100 09:46:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:14.100 09:46:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:14.100 1+0 records in 00:08:14.100 1+0 records out 00:08:14.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549011 s, 7.5 MB/s 00:08:14.100 09:46:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.100 09:46:07 -- common/autotest_common.sh@874 -- # size=4096 00:08:14.100 09:46:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.100 09:46:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:14.100 09:46:07 -- common/autotest_common.sh@877 -- # return 0 00:08:14.100 09:46:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:14.100 09:46:07 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:14.100 09:46:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:14.358 09:46:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:14.359 09:46:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:14.359 09:46:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:14.359 09:46:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:14.359 09:46:08 -- common/autotest_common.sh@857 -- # local i 00:08:14.359 09:46:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:14.359 09:46:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:14.359 09:46:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:14.359 09:46:08 -- common/autotest_common.sh@861 -- # break 00:08:14.359 09:46:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:14.359 09:46:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:14.359 09:46:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:14.359 1+0 records in 00:08:14.359 1+0 records out 00:08:14.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583187 s, 7.0 MB/s 00:08:14.359 09:46:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.359 09:46:08 -- common/autotest_common.sh@874 -- # size=4096 00:08:14.359 09:46:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.359 09:46:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:14.359 09:46:08 -- common/autotest_common.sh@877 -- # return 0 00:08:14.359 09:46:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:14.359 09:46:08 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:14.359 09:46:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:14.618 09:46:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:14.618 09:46:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:14.618 09:46:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:14.618 09:46:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:08:14.618 09:46:08 -- common/autotest_common.sh@857 -- # local i 00:08:14.618 09:46:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:14.618 09:46:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:14.618 09:46:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:08:14.618 09:46:08 -- common/autotest_common.sh@861 -- # break 00:08:14.618 09:46:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:14.618 09:46:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:14.618 09:46:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:14.618 1+0 records in 00:08:14.618 1+0 records out 00:08:14.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796339 s, 5.1 MB/s 00:08:14.618 09:46:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.618 09:46:08 -- common/autotest_common.sh@874 -- # size=4096 00:08:14.618 09:46:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.618 09:46:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:14.618 09:46:08 -- common/autotest_common.sh@877 -- # return 0 
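Each waitfornbd call in this stretch is a two-stage readiness probe: poll /proc/partitions until the kernel has registered the device, then issue a single direct-I/O 4 KiB read and check its size, since an nbd device can appear in /proc/partitions before it actually serves data. Reconstructed from the traced commands (the retry sleep is assumed, and the scratch-file path is shortened here):

    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                    # assumed backoff between polls
        done
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                                 # the probe read must return real bytes
    }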
00:08:14.618 09:46:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:14.618 09:46:08 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:14.618 09:46:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:14.877 09:46:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:14.877 09:46:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:14.877 09:46:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:14.877 09:46:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:08:14.877 09:46:08 -- common/autotest_common.sh@857 -- # local i 00:08:14.877 09:46:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:14.877 09:46:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:14.877 09:46:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:08:14.877 09:46:08 -- common/autotest_common.sh@861 -- # break 00:08:14.877 09:46:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:14.877 09:46:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:14.877 09:46:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:14.877 1+0 records in 00:08:14.877 1+0 records out 00:08:14.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000937778 s, 4.4 MB/s 00:08:14.877 09:46:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.877 09:46:08 -- common/autotest_common.sh@874 -- # size=4096 00:08:14.877 09:46:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.877 09:46:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:14.877 09:46:08 -- common/autotest_common.sh@877 -- # return 0 00:08:14.877 09:46:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:14.877 09:46:08 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:14.877 09:46:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:15.136 09:46:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:15.136 09:46:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:15.136 09:46:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:15.136 09:46:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:08:15.136 09:46:08 -- common/autotest_common.sh@857 -- # local i 00:08:15.136 09:46:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:15.136 09:46:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:15.136 09:46:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:08:15.136 09:46:08 -- common/autotest_common.sh@861 -- # break 00:08:15.136 09:46:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:15.136 09:46:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:15.136 09:46:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:15.136 1+0 records in 00:08:15.136 1+0 records out 00:08:15.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000941381 s, 4.4 MB/s 00:08:15.136 09:46:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.136 09:46:08 -- common/autotest_common.sh@874 -- # size=4096 00:08:15.136 09:46:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.136 09:46:08 -- common/autotest_common.sh@876 -- # '[' 
4096 '!=' 0 ']' 00:08:15.136 09:46:08 -- common/autotest_common.sh@877 -- # return 0 00:08:15.136 09:46:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:15.136 09:46:08 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:15.136 09:46:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:15.395 09:46:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:15.395 09:46:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:15.395 09:46:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:15.395 09:46:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:08:15.395 09:46:09 -- common/autotest_common.sh@857 -- # local i 00:08:15.395 09:46:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:15.395 09:46:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:15.395 09:46:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:08:15.395 09:46:09 -- common/autotest_common.sh@861 -- # break 00:08:15.395 09:46:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:15.395 09:46:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:15.395 09:46:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:15.395 1+0 records in 00:08:15.395 1+0 records out 00:08:15.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000830655 s, 4.9 MB/s 00:08:15.395 09:46:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.395 09:46:09 -- common/autotest_common.sh@874 -- # size=4096 00:08:15.395 09:46:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.395 09:46:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:15.395 09:46:09 -- common/autotest_common.sh@877 -- # return 0 00:08:15.395 09:46:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:15.395 09:46:09 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:15.395 09:46:09 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:15.653 09:46:09 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:15.653 { 00:08:15.653 "nbd_device": "/dev/nbd0", 00:08:15.653 "bdev_name": "Nvme0n1" 00:08:15.653 }, 00:08:15.653 { 00:08:15.653 "nbd_device": "/dev/nbd1", 00:08:15.653 "bdev_name": "Nvme1n1" 00:08:15.654 }, 00:08:15.654 { 00:08:15.654 "nbd_device": "/dev/nbd2", 00:08:15.654 "bdev_name": "Nvme2n1" 00:08:15.654 }, 00:08:15.654 { 00:08:15.654 "nbd_device": "/dev/nbd3", 00:08:15.654 "bdev_name": "Nvme2n2" 00:08:15.654 }, 00:08:15.654 { 00:08:15.654 "nbd_device": "/dev/nbd4", 00:08:15.654 "bdev_name": "Nvme2n3" 00:08:15.654 }, 00:08:15.654 { 00:08:15.654 "nbd_device": "/dev/nbd5", 00:08:15.654 "bdev_name": "Nvme3n1" 00:08:15.654 } 00:08:15.654 ]' 00:08:15.654 09:46:09 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:15.654 09:46:09 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:15.654 09:46:09 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:15.654 { 00:08:15.654 "nbd_device": "/dev/nbd0", 00:08:15.654 "bdev_name": "Nvme0n1" 00:08:15.654 }, 00:08:15.654 { 00:08:15.654 "nbd_device": "/dev/nbd1", 00:08:15.654 "bdev_name": "Nvme1n1" 00:08:15.654 }, 00:08:15.654 { 00:08:15.654 "nbd_device": "/dev/nbd2", 00:08:15.654 "bdev_name": "Nvme2n1" 00:08:15.654 }, 00:08:15.654 { 00:08:15.654 "nbd_device": "/dev/nbd3", 00:08:15.654 
"bdev_name": "Nvme2n2" 00:08:15.654 }, 00:08:15.654 { 00:08:15.654 "nbd_device": "/dev/nbd4", 00:08:15.654 "bdev_name": "Nvme2n3" 00:08:15.654 }, 00:08:15.654 { 00:08:15.654 "nbd_device": "/dev/nbd5", 00:08:15.654 "bdev_name": "Nvme3n1" 00:08:15.654 } 00:08:15.654 ]' 00:08:15.654 09:46:09 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:15.654 09:46:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.654 09:46:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:15.654 09:46:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:15.654 09:46:09 -- bdev/nbd_common.sh@51 -- # local i 00:08:15.654 09:46:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.654 09:46:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:15.913 09:46:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:15.913 09:46:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:15.913 09:46:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:15.913 09:46:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.913 09:46:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.913 09:46:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:15.913 09:46:09 -- bdev/nbd_common.sh@41 -- # break 00:08:15.913 09:46:09 -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.913 09:46:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.913 09:46:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:16.171 09:46:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:16.171 09:46:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:16.171 09:46:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:16.171 09:46:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.171 09:46:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.171 09:46:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:16.171 09:46:09 -- bdev/nbd_common.sh@41 -- # break 00:08:16.171 09:46:09 -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.171 09:46:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.171 09:46:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:16.430 09:46:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:16.430 09:46:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:16.430 09:46:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:16.430 09:46:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.430 09:46:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.430 09:46:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:16.430 09:46:10 -- bdev/nbd_common.sh@41 -- # break 00:08:16.430 09:46:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.430 09:46:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.430 09:46:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:16.688 09:46:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:16.688 09:46:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:16.688 09:46:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:16.688 
09:46:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.688 09:46:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.688 09:46:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:16.688 09:46:10 -- bdev/nbd_common.sh@41 -- # break 00:08:16.689 09:46:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.689 09:46:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.689 09:46:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:16.947 09:46:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:16.947 09:46:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:16.947 09:46:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:16.947 09:46:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.947 09:46:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.947 09:46:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:16.947 09:46:10 -- bdev/nbd_common.sh@41 -- # break 00:08:16.948 09:46:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.948 09:46:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.948 09:46:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:17.207 09:46:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:17.207 09:46:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:17.207 09:46:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:17.207 09:46:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.207 09:46:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.207 09:46:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:17.207 09:46:10 -- bdev/nbd_common.sh@41 -- # break 00:08:17.207 09:46:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.207 09:46:10 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:17.207 09:46:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.207 09:46:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@65 -- # true 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@65 -- # count=0 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@122 -- # count=0 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@127 -- # return 0 00:08:17.466 09:46:11 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@12 -- # local i 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:17.466 09:46:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:17.724 /dev/nbd0 00:08:17.724 09:46:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:17.724 09:46:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:17.724 09:46:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:17.724 09:46:11 -- common/autotest_common.sh@857 -- # local i 00:08:17.724 09:46:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:17.724 09:46:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:17.724 09:46:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:17.724 09:46:11 -- common/autotest_common.sh@861 -- # break 00:08:17.724 09:46:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:17.724 09:46:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:17.724 09:46:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:17.724 1+0 records in 00:08:17.724 1+0 records out 00:08:17.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633647 s, 6.5 MB/s 00:08:17.724 09:46:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.724 09:46:11 -- common/autotest_common.sh@874 -- # size=4096 00:08:17.724 09:46:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.724 09:46:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:17.724 09:46:11 -- common/autotest_common.sh@877 -- # return 0 00:08:17.724 09:46:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:17.724 09:46:11 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:17.724 09:46:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:17.981 /dev/nbd1 00:08:17.981 09:46:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:17.981 09:46:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:17.981 09:46:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:17.981 09:46:11 -- common/autotest_common.sh@857 -- # local i 00:08:17.981 09:46:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:17.981 09:46:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:17.981 09:46:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:17.981 09:46:11 -- common/autotest_common.sh@861 -- # break 
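In this data-verify phase nbd_start_disk is given an explicit device path as its second argument, pinning each bdev to a known /dev/nbdX; the earlier start/stop pass omitted the path and let SPDK return whichever free device it picked (compare 'nbd_start_disk Nvme0n1' followed by nbd_device=/dev/nbd0 further up). As logged:

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10
    # without the device argument the RPC allocates and returns a free /dev/nbdX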
00:08:17.981 09:46:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:17.981 09:46:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:17.981 09:46:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:17.981 1+0 records in 00:08:17.981 1+0 records out 00:08:17.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438671 s, 9.3 MB/s 00:08:17.981 09:46:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.981 09:46:11 -- common/autotest_common.sh@874 -- # size=4096 00:08:17.981 09:46:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.981 09:46:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:17.981 09:46:11 -- common/autotest_common.sh@877 -- # return 0 00:08:17.981 09:46:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:17.981 09:46:11 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:17.981 09:46:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:18.238 /dev/nbd10 00:08:18.238 09:46:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:18.238 09:46:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:18.238 09:46:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:08:18.238 09:46:11 -- common/autotest_common.sh@857 -- # local i 00:08:18.238 09:46:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:18.238 09:46:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:18.238 09:46:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:08:18.238 09:46:11 -- common/autotest_common.sh@861 -- # break 00:08:18.238 09:46:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:18.238 09:46:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:18.238 09:46:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.238 1+0 records in 00:08:18.238 1+0 records out 00:08:18.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675999 s, 6.1 MB/s 00:08:18.238 09:46:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.238 09:46:11 -- common/autotest_common.sh@874 -- # size=4096 00:08:18.238 09:46:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.238 09:46:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:18.238 09:46:11 -- common/autotest_common.sh@877 -- # return 0 00:08:18.238 09:46:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.238 09:46:11 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:18.238 09:46:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:18.495 /dev/nbd11 00:08:18.495 09:46:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:18.495 09:46:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:18.495 09:46:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:08:18.495 09:46:12 -- common/autotest_common.sh@857 -- # local i 00:08:18.495 09:46:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:18.496 09:46:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:18.496 09:46:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:08:18.754 09:46:12 -- 
common/autotest_common.sh@861 -- # break 00:08:18.754 09:46:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:18.754 09:46:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:18.754 09:46:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.754 1+0 records in 00:08:18.754 1+0 records out 00:08:18.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805009 s, 5.1 MB/s 00:08:18.754 09:46:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.754 09:46:12 -- common/autotest_common.sh@874 -- # size=4096 00:08:18.754 09:46:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.754 09:46:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:18.754 09:46:12 -- common/autotest_common.sh@877 -- # return 0 00:08:18.754 09:46:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.754 09:46:12 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:18.754 09:46:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:19.012 /dev/nbd12 00:08:19.012 09:46:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:19.012 09:46:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:19.012 09:46:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:08:19.012 09:46:12 -- common/autotest_common.sh@857 -- # local i 00:08:19.012 09:46:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:19.012 09:46:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:19.012 09:46:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:08:19.012 09:46:12 -- common/autotest_common.sh@861 -- # break 00:08:19.012 09:46:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:19.012 09:46:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:19.012 09:46:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:19.012 1+0 records in 00:08:19.012 1+0 records out 00:08:19.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663111 s, 6.2 MB/s 00:08:19.012 09:46:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.012 09:46:12 -- common/autotest_common.sh@874 -- # size=4096 00:08:19.012 09:46:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.012 09:46:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:19.012 09:46:12 -- common/autotest_common.sh@877 -- # return 0 00:08:19.012 09:46:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:19.012 09:46:12 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:19.012 09:46:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:19.271 /dev/nbd13 00:08:19.271 09:46:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:19.271 09:46:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:19.271 09:46:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:08:19.271 09:46:12 -- common/autotest_common.sh@857 -- # local i 00:08:19.271 09:46:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:19.271 09:46:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:19.271 09:46:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 
00:08:19.271 09:46:12 -- common/autotest_common.sh@861 -- # break 00:08:19.271 09:46:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:19.271 09:46:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:19.271 09:46:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:19.271 1+0 records in 00:08:19.271 1+0 records out 00:08:19.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138609 s, 3.0 MB/s 00:08:19.271 09:46:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.271 09:46:12 -- common/autotest_common.sh@874 -- # size=4096 00:08:19.271 09:46:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.271 09:46:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:19.271 09:46:12 -- common/autotest_common.sh@877 -- # return 0 00:08:19.271 09:46:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:19.271 09:46:12 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:19.271 09:46:12 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:19.271 09:46:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.271 09:46:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:19.529 { 00:08:19.529 "nbd_device": "/dev/nbd0", 00:08:19.529 "bdev_name": "Nvme0n1" 00:08:19.529 }, 00:08:19.529 { 00:08:19.529 "nbd_device": "/dev/nbd1", 00:08:19.529 "bdev_name": "Nvme1n1" 00:08:19.529 }, 00:08:19.529 { 00:08:19.529 "nbd_device": "/dev/nbd10", 00:08:19.529 "bdev_name": "Nvme2n1" 00:08:19.529 }, 00:08:19.529 { 00:08:19.529 "nbd_device": "/dev/nbd11", 00:08:19.529 "bdev_name": "Nvme2n2" 00:08:19.529 }, 00:08:19.529 { 00:08:19.529 "nbd_device": "/dev/nbd12", 00:08:19.529 "bdev_name": "Nvme2n3" 00:08:19.529 }, 00:08:19.529 { 00:08:19.529 "nbd_device": "/dev/nbd13", 00:08:19.529 "bdev_name": "Nvme3n1" 00:08:19.529 } 00:08:19.529 ]' 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:19.529 { 00:08:19.529 "nbd_device": "/dev/nbd0", 00:08:19.529 "bdev_name": "Nvme0n1" 00:08:19.529 }, 00:08:19.529 { 00:08:19.529 "nbd_device": "/dev/nbd1", 00:08:19.529 "bdev_name": "Nvme1n1" 00:08:19.529 }, 00:08:19.529 { 00:08:19.529 "nbd_device": "/dev/nbd10", 00:08:19.529 "bdev_name": "Nvme2n1" 00:08:19.529 }, 00:08:19.529 { 00:08:19.529 "nbd_device": "/dev/nbd11", 00:08:19.529 "bdev_name": "Nvme2n2" 00:08:19.529 }, 00:08:19.529 { 00:08:19.529 "nbd_device": "/dev/nbd12", 00:08:19.529 "bdev_name": "Nvme2n3" 00:08:19.529 }, 00:08:19.529 { 00:08:19.529 "nbd_device": "/dev/nbd13", 00:08:19.529 "bdev_name": "Nvme3n1" 00:08:19.529 } 00:08:19.529 ]' 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:19.529 /dev/nbd1 00:08:19.529 /dev/nbd10 00:08:19.529 /dev/nbd11 00:08:19.529 /dev/nbd12 00:08:19.529 /dev/nbd13' 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:19.529 /dev/nbd1 00:08:19.529 /dev/nbd10 00:08:19.529 /dev/nbd11 00:08:19.529 /dev/nbd12 00:08:19.529 /dev/nbd13' 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@65 -- # count=6 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@66 -- # echo 6 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@95 -- # count=6 
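The write/verify cycle that follows is symmetric by design: build one 1 MiB random reference file, dd it onto every NBD device with direct I/O, then cmp the first 1 MiB of each device back against the reference, so corruption on any single path fails the whole test. Condensed from the traced commands (scratch path shortened; the device list is taken from nbd_get_disks exactly as the harness does):

    sock=/var/tmp/spdk-nbd.sock
    mapfile -t nbds < <(scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
    (( ${#nbds[@]} == 6 ))                                    # this run expects all six devices
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256  # 1 MiB reference pattern
    for dev in "${nbds[@]}"; do
        dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbds[@]}"; do
        cmp -b -n 1M /tmp/nbdrandtest "$dev"                  # any byte mismatch exits nonzero
    done
    rm /tmp/nbdrandtest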
00:08:19.529 09:46:13 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:19.529 09:46:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:19.530 09:46:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:19.530 09:46:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:19.530 256+0 records in 00:08:19.530 256+0 records out 00:08:19.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00727071 s, 144 MB/s 00:08:19.530 09:46:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.530 09:46:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:19.787 256+0 records in 00:08:19.787 256+0 records out 00:08:19.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176073 s, 6.0 MB/s 00:08:19.787 09:46:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.787 09:46:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:19.787 256+0 records in 00:08:19.787 256+0 records out 00:08:19.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154416 s, 6.8 MB/s 00:08:19.787 09:46:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.787 09:46:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:20.045 256+0 records in 00:08:20.045 256+0 records out 00:08:20.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180274 s, 5.8 MB/s 00:08:20.045 09:46:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:20.045 09:46:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:20.304 256+0 records in 00:08:20.304 256+0 records out 00:08:20.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175304 s, 6.0 MB/s 00:08:20.304 09:46:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:20.304 09:46:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:20.304 256+0 records in 00:08:20.304 256+0 records out 00:08:20.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172538 s, 6.1 MB/s 00:08:20.304 09:46:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:20.304 09:46:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:20.563 256+0 records in 00:08:20.563 256+0 records out 00:08:20.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175664 s, 6.0 MB/s 00:08:20.563 09:46:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:20.563 09:46:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:20.563 09:46:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:20.563 09:46:14 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:08:20.563 09:46:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:20.563 09:46:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:20.563 09:46:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:20.563 09:46:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:20.563 09:46:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:20.563 09:46:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:20.563 09:46:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:20.563 09:46:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:20.563 09:46:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@51 -- # local i 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:20.564 09:46:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:20.850 09:46:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:20.850 09:46:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:20.850 09:46:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:20.850 09:46:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:20.850 09:46:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:20.850 09:46:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:20.850 09:46:14 -- bdev/nbd_common.sh@41 -- # break 00:08:20.850 09:46:14 -- bdev/nbd_common.sh@45 -- # return 0 00:08:20.850 09:46:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:20.850 09:46:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:21.130 09:46:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:21.130 09:46:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:21.130 09:46:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:21.130 09:46:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:21.130 09:46:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:21.130 09:46:14 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:21.130 09:46:14 -- bdev/nbd_common.sh@41 -- # break 00:08:21.130 09:46:14 -- bdev/nbd_common.sh@45 -- # return 0 00:08:21.130 09:46:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:21.130 09:46:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:21.387 09:46:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:21.387 09:46:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:21.387 09:46:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:21.387 09:46:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:21.387 09:46:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:21.387 09:46:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:21.387 09:46:14 -- bdev/nbd_common.sh@41 -- # break 00:08:21.387 09:46:14 -- bdev/nbd_common.sh@45 -- # return 0 00:08:21.387 09:46:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:21.387 09:46:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@41 -- # break 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@45 -- # return 0 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:21.645 09:46:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@41 -- # break 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@45 -- # return 0 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@41 -- # break 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@45 -- # return 0 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.904 09:46:15 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:22.162 09:46:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:22.162 09:46:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:22.162 09:46:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@65 -- # true 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@65 -- # count=0 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@104 -- # count=0 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@109 -- # return 0 00:08:22.421 09:46:15 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:22.421 09:46:15 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:22.680 malloc_lvol_verify 00:08:22.680 09:46:16 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:22.937 c2544c87-427f-4c0d-8834-5f3bc8fa5deb 00:08:22.937 09:46:16 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:22.937 ff4bc59c-4631-4e88-8d59-3ba5deefcf8a 00:08:23.196 09:46:16 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:23.196 /dev/nbd0 00:08:23.196 09:46:16 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:23.196 mke2fs 1.46.5 (30-Dec-2021) 00:08:23.196 Discarding device blocks: 0/4096 done 00:08:23.196 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:23.196 00:08:23.196 Allocating group tables: 0/1 done 00:08:23.196 Writing inode tables: 0/1 done 00:08:23.196 Creating journal (1024 blocks): done 00:08:23.196 Writing superblocks and filesystem accounting information: 0/1 done 00:08:23.196 00:08:23.196 09:46:16 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:23.196 09:46:16 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:23.196 09:46:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.196 09:46:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:23.196 09:46:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:23.196 09:46:16 -- bdev/nbd_common.sh@51 -- # local i 00:08:23.196 09:46:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.196 09:46:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:23.455 09:46:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:23.455 09:46:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:23.455 09:46:17 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:08:23.455 09:46:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.455 09:46:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.455 09:46:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:23.455 09:46:17 -- bdev/nbd_common.sh@41 -- # break 00:08:23.455 09:46:17 -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.455 09:46:17 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:23.455 09:46:17 -- bdev/nbd_common.sh@147 -- # return 0 00:08:23.455 09:46:17 -- bdev/blockdev.sh@324 -- # killprocess 61496 00:08:23.455 09:46:17 -- common/autotest_common.sh@926 -- # '[' -z 61496 ']' 00:08:23.455 09:46:17 -- common/autotest_common.sh@930 -- # kill -0 61496 00:08:23.455 09:46:17 -- common/autotest_common.sh@931 -- # uname 00:08:23.455 09:46:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:23.455 09:46:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61496 00:08:23.455 09:46:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:23.455 09:46:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:23.455 killing process with pid 61496 00:08:23.455 09:46:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61496' 00:08:23.455 09:46:17 -- common/autotest_common.sh@945 -- # kill 61496 00:08:23.455 09:46:17 -- common/autotest_common.sh@950 -- # wait 61496 00:08:24.833 09:46:18 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:24.833 00:08:24.833 real 0m12.531s 00:08:24.833 user 0m17.597s 00:08:24.833 sys 0m3.706s 00:08:24.833 09:46:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.833 09:46:18 -- common/autotest_common.sh@10 -- # set +x 00:08:24.833 ************************************ 00:08:24.833 END TEST bdev_nbd 00:08:24.833 ************************************ 00:08:24.833 09:46:18 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:24.833 09:46:18 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:08:24.833 skipping fio tests on NVMe due to multi-ns failures. 00:08:24.833 09:46:18 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:24.833 09:46:18 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:24.833 09:46:18 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:24.833 09:46:18 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:08:24.833 09:46:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.833 09:46:18 -- common/autotest_common.sh@10 -- # set +x 00:08:24.833 ************************************ 00:08:24.833 START TEST bdev_verify 00:08:24.833 ************************************ 00:08:24.833 09:46:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:24.833 [2024-06-10 09:46:18.395123] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
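The killprocess call that tore down the NBD app a few lines up follows a fixed recipe visible in the trace: bail on an empty pid, probe liveness with kill -0, refuse to signal sudo, then kill and reap with wait. A condensed sketch of that helper, not the exact autotest_common.sh body:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0     # nothing to do if already gone
    # Same guard as the reactor_0/sudo comparison in the trace: never
    # signal a process whose command name is sudo.
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true   # reap it; a killed child exits non-zero
}

With bdev_nbd finished, the harness skips fio (multi-ns failures, per the notice above) and launches bdevperf for TEST bdev_verify; the instance now initializing prints its DPDK EAL parameters next.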
00:08:24.833 [2024-06-10 09:46:18.395300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61905 ] 00:08:24.833 [2024-06-10 09:46:18.564192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:25.092 [2024-06-10 09:46:18.728858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.092 [2024-06-10 09:46:18.728872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.657 Running I/O for 5 seconds... 00:08:30.924 00:08:30.924 Latency(us) 00:08:30.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.924 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:30.924 Verification LBA range: start 0x0 length 0xbd0bd 00:08:30.924 Nvme0n1 : 5.04 2846.92 11.12 0.00 0.00 44836.57 7447.27 49569.05 00:08:30.924 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:30.924 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:30.924 Nvme0n1 : 5.04 2826.72 11.04 0.00 0.00 45135.51 7745.16 56003.49 00:08:30.924 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:30.924 Verification LBA range: start 0x0 length 0xa0000 00:08:30.924 Nvme1n1 : 5.04 2845.75 11.12 0.00 0.00 44823.48 7923.90 47424.23 00:08:30.924 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:30.924 Verification LBA range: start 0xa0000 length 0xa0000 00:08:30.924 Nvme1n1 : 5.05 2832.34 11.06 0.00 0.00 45021.56 4468.36 50998.92 00:08:30.924 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:30.924 Verification LBA range: start 0x0 length 0x80000 00:08:30.924 Nvme2n1 : 5.04 2844.67 11.11 0.00 0.00 44803.41 8757.99 46947.61 00:08:30.924 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:30.924 Verification LBA range: start 0x80000 length 0x80000 00:08:30.924 Nvme2n1 : 5.05 2830.92 11.06 0.00 0.00 44931.71 6047.19 42181.35 00:08:30.924 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:30.924 Verification LBA range: start 0x0 length 0x80000 00:08:30.924 Nvme2n2 : 5.05 2849.95 11.13 0.00 0.00 44659.80 2710.81 38368.35 00:08:30.924 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:30.924 Verification LBA range: start 0x80000 length 0x80000 00:08:30.924 Nvme2n2 : 5.05 2829.46 11.05 0.00 0.00 44889.14 7536.64 40989.79 00:08:30.924 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:30.924 Verification LBA range: start 0x0 length 0x80000 00:08:30.924 Nvme2n3 : 5.05 2848.46 11.13 0.00 0.00 44637.99 4289.63 36938.47 00:08:30.924 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:30.924 Verification LBA range: start 0x80000 length 0x80000 00:08:30.924 Nvme2n3 : 5.06 2836.56 11.08 0.00 0.00 44770.45 1846.92 40513.16 00:08:30.924 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:30.924 Verification LBA range: start 0x0 length 0x20000 00:08:30.924 Nvme3n1 : 5.05 2852.16 11.14 0.00 0.00 44555.01 2010.76 36700.16 00:08:30.924 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:30.924 Verification LBA range: start 0x20000 length 0x20000 00:08:30.924 Nvme3n1 : 5.06 2835.81 11.08 0.00 0.00 44744.17 2234.18 40274.85 00:08:30.924 
=================================================================================================================== 00:08:30.924 Total : 34079.71 133.12 0.00 0.00 44816.93 1846.92 56003.49 00:08:39.040 00:08:39.040 real 0m14.400s 00:08:39.040 user 0m27.453s 00:08:39.040 sys 0m0.309s 00:08:39.040 09:46:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.040 ************************************ 00:08:39.040 END TEST bdev_verify 00:08:39.040 ************************************ 00:08:39.040 09:46:32 -- common/autotest_common.sh@10 -- # set +x 00:08:39.040 09:46:32 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:39.040 09:46:32 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:08:39.040 09:46:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.040 09:46:32 -- common/autotest_common.sh@10 -- # set +x 00:08:39.040 ************************************ 00:08:39.040 START TEST bdev_verify_big_io 00:08:39.040 ************************************ 00:08:39.040 09:46:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:39.311 [2024-06-10 09:46:32.835670] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:39.311 [2024-06-10 09:46:32.835811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62050 ] 00:08:39.311 [2024-06-10 09:46:32.998294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:39.570 [2024-06-10 09:46:33.163844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.570 [2024-06-10 09:46:33.163849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.136 Running I/O for 5 seconds... 
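Before the numbers land, it is worth pinning down the invocation driving this pass, condensed from the trace above with repo-relative paths. The flag glosses are from bdevperf's usage text as commonly documented; -C is reproduced as passed in this run, so confirm its semantics with --help on your build:

# Big-I/O verify pass, as traced above.
#   -q 128    queue depth
#   -o 65536  I/O size in bytes (64 KiB; the earlier bdev_verify pass used 4096)
#   -w verify write, read back, and compare
#   -t 5      run time in seconds
#   -m 0x3    core mask: two reactors, matching cores 0 and 1 above
#   -C        as passed in this run
./build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3

The per-device latency table it produces follows.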
00:08:46.702 00:08:46.702 Latency(us) 00:08:46.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.703 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:46.703 Verification LBA range: start 0x0 length 0xbd0b 00:08:46.703 Nvme0n1 : 5.38 232.85 14.55 0.00 0.00 536405.62 54811.93 785478.75 00:08:46.703 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:46.703 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:46.703 Nvme0n1 : 5.32 261.39 16.34 0.00 0.00 478109.47 79119.83 571950.55 00:08:46.703 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:46.703 Verification LBA range: start 0x0 length 0xa000 00:08:46.703 Nvme1n1 : 5.38 232.77 14.55 0.00 0.00 527630.84 55050.24 713031.68 00:08:46.703 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:46.703 Verification LBA range: start 0xa000 length 0xa000 00:08:46.703 Nvme1n1 : 5.36 266.90 16.68 0.00 0.00 465158.61 33363.78 522381.50 00:08:46.703 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:46.703 Verification LBA range: start 0x0 length 0x8000 00:08:46.703 Nvme2n1 : 5.40 240.11 15.01 0.00 0.00 507478.68 15847.80 648210.62 00:08:46.703 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:46.703 Verification LBA range: start 0x8000 length 0x8000 00:08:46.703 Nvme2n1 : 5.36 266.81 16.68 0.00 0.00 459542.10 33602.09 480438.46 00:08:46.703 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:46.703 Verification LBA range: start 0x0 length 0x8000 00:08:46.703 Nvme2n2 : 5.42 248.84 15.55 0.00 0.00 480535.66 16801.05 579576.55 00:08:46.703 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:46.703 Verification LBA range: start 0x8000 length 0x8000 00:08:46.703 Nvme2n2 : 5.37 275.43 17.21 0.00 0.00 442994.02 3813.00 442308.42 00:08:46.703 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:46.703 Verification LBA range: start 0x0 length 0x8000 00:08:46.703 Nvme2n3 : 5.47 270.28 16.89 0.00 0.00 433252.56 11975.21 457560.44 00:08:46.703 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:46.703 Verification LBA range: start 0x8000 length 0x8000 00:08:46.703 Nvme2n3 : 5.37 275.33 17.21 0.00 0.00 438236.15 4408.79 407991.39 00:08:46.703 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:46.703 Verification LBA range: start 0x0 length 0x2000 00:08:46.703 Nvme3n1 : 5.50 310.22 19.39 0.00 0.00 373093.24 636.74 453747.43 00:08:46.703 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:46.703 Verification LBA range: start 0x2000 length 0x2000 00:08:46.703 Nvme3n1 : 5.38 283.51 17.72 0.00 0.00 422037.80 4557.73 407991.39 00:08:46.703 =================================================================================================================== 00:08:46.703 Total : 3164.45 197.78 0.00 0.00 459919.95 636.74 785478.75 00:08:47.270 00:08:47.270 real 0m8.207s 00:08:47.270 user 0m15.217s 00:08:47.270 sys 0m0.257s 00:08:47.270 09:46:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.270 09:46:40 -- common/autotest_common.sh@10 -- # set +x 00:08:47.270 ************************************ 00:08:47.270 END TEST bdev_verify_big_io 00:08:47.270 ************************************ 00:08:47.270 09:46:41 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:47.270 09:46:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:47.270 09:46:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:47.270 09:46:41 -- common/autotest_common.sh@10 -- # set +x 00:08:47.270 ************************************ 00:08:47.270 START TEST bdev_write_zeroes 00:08:47.270 ************************************ 00:08:47.270 09:46:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:47.529 [2024-06-10 09:46:41.111901] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:47.529 [2024-06-10 09:46:41.112080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62155 ] 00:08:47.529 [2024-06-10 09:46:41.281189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.788 [2024-06-10 09:46:41.454394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.355 Running I/O for 1 seconds... 00:08:49.727 00:08:49.727 Latency(us) 00:08:49.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.727 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:49.727 Nvme0n1 : 1.02 8976.26 35.06 0.00 0.00 14215.56 10724.07 27644.28 00:08:49.727 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:49.727 Nvme1n1 : 1.02 8961.80 35.01 0.00 0.00 14216.21 11498.59 28359.21 00:08:49.727 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:49.727 Nvme2n1 : 1.02 8948.44 34.95 0.00 0.00 14181.89 11319.85 26691.03 00:08:49.727 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:49.727 Nvme2n2 : 1.02 8935.11 34.90 0.00 0.00 14133.60 11558.17 23473.80 00:08:49.727 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:49.727 Nvme2n3 : 1.03 8975.42 35.06 0.00 0.00 14035.83 8996.31 18945.86 00:08:49.727 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:49.727 Nvme3n1 : 1.03 8962.13 35.01 0.00 0.00 14019.19 8400.52 18230.92 00:08:49.727 =================================================================================================================== 00:08:49.727 Total : 53759.17 210.00 0.00 0.00 14133.46 8400.52 28359.21 00:08:50.662 00:08:50.662 real 0m3.184s 00:08:50.662 user 0m2.839s 00:08:50.662 sys 0m0.223s 00:08:50.662 09:46:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.662 09:46:44 -- common/autotest_common.sh@10 -- # set +x 00:08:50.662 ************************************ 00:08:50.662 END TEST bdev_write_zeroes 00:08:50.662 ************************************ 00:08:50.662 09:46:44 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:50.662 09:46:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:50.662 09:46:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.662 09:46:44 -- common/autotest_common.sh@10 
-- # set +x 00:08:50.662 ************************************ 00:08:50.662 START TEST bdev_json_nonenclosed 00:08:50.662 ************************************ 00:08:50.662 09:46:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:50.662 [2024-06-10 09:46:44.352341] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:50.662 [2024-06-10 09:46:44.352513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62208 ] 00:08:50.921 [2024-06-10 09:46:44.524065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.179 [2024-06-10 09:46:44.701199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.179 [2024-06-10 09:46:44.701453] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:51.179 [2024-06-10 09:46:44.701484] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:51.438 00:08:51.438 real 0m0.780s 00:08:51.438 user 0m0.544s 00:08:51.438 sys 0m0.129s 00:08:51.438 09:46:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.438 09:46:45 -- common/autotest_common.sh@10 -- # set +x 00:08:51.438 ************************************ 00:08:51.438 END TEST bdev_json_nonenclosed 00:08:51.438 ************************************ 00:08:51.438 09:46:45 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:51.438 09:46:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:51.438 09:46:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:51.438 09:46:45 -- common/autotest_common.sh@10 -- # set +x 00:08:51.438 ************************************ 00:08:51.438 START TEST bdev_json_nonarray 00:08:51.438 ************************************ 00:08:51.438 09:46:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:51.438 [2024-06-10 09:46:45.164495] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:51.438 [2024-06-10 09:46:45.164633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62239 ] 00:08:51.697 [2024-06-10 09:46:45.322396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.955 [2024-06-10 09:46:45.486900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.955 [2024-06-10 09:46:45.487101] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
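That *ERROR* is the point of the test: a valid --json config is a single object whose "subsystems" member is an array, e.g. { "subsystems": [ { "subsystem": "bdev", "config": [ ... ] } ] }. nonenclosed.json drops the outer braces and nonarray.json makes "subsystems" a non-array, and spdk_subsystem_init_from_json_config must reject both. A sketch of the shape of these negative tests; the file contents below are illustrative, not copied from the repo:

# Not enclosed in {} -- should trip the json_config.c:595 error above.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
# "subsystems" is an object, not an array -- should trip json_config.c:601.
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF
if ./build/examples/bdevperf --json /tmp/nonarray.json \
       -q 128 -o 4096 -w write_zeroes -t 1; then
    echo "expected bdevperf to reject the invalid config" >&2
    exit 1
fi

On rejection the app stops itself with a non-zero code, which is exactly the *WARNING* on the next line.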
00:08:51.955 [2024-06-10 09:46:45.487161] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:52.214 00:08:52.214 real 0m0.763s 00:08:52.214 user 0m0.549s 00:08:52.214 sys 0m0.109s 00:08:52.214 09:46:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.214 ************************************ 00:08:52.214 09:46:45 -- common/autotest_common.sh@10 -- # set +x 00:08:52.214 END TEST bdev_json_nonarray 00:08:52.214 ************************************ 00:08:52.214 09:46:45 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:52.214 09:46:45 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:52.214 09:46:45 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:52.214 09:46:45 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:52.214 09:46:45 -- bdev/blockdev.sh@809 -- # cleanup 00:08:52.214 09:46:45 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:52.214 09:46:45 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:52.214 09:46:45 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:52.214 09:46:45 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:52.214 09:46:45 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:52.214 09:46:45 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:52.214 00:08:52.214 real 0m49.925s 00:08:52.214 user 1m19.544s 00:08:52.214 sys 0m6.110s 00:08:52.214 09:46:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.214 09:46:45 -- common/autotest_common.sh@10 -- # set +x 00:08:52.214 ************************************ 00:08:52.214 END TEST blockdev_nvme 00:08:52.214 ************************************ 00:08:52.214 09:46:45 -- spdk/autotest.sh@219 -- # uname -s 00:08:52.214 09:46:45 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:08:52.214 09:46:45 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:52.214 09:46:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:52.214 09:46:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.214 09:46:45 -- common/autotest_common.sh@10 -- # set +x 00:08:52.214 ************************************ 00:08:52.214 START TEST blockdev_nvme_gpt 00:08:52.214 ************************************ 00:08:52.214 09:46:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:52.472 * Looking for test storage... 
00:08:52.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:52.472 09:46:46 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:52.472 09:46:46 -- bdev/nbd_common.sh@6 -- # set -e 00:08:52.472 09:46:46 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:52.472 09:46:46 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:52.472 09:46:46 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:52.472 09:46:46 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:52.472 09:46:46 -- bdev/blockdev.sh@18 -- # : 00:08:52.472 09:46:46 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:52.472 09:46:46 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:52.472 09:46:46 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:52.472 09:46:46 -- bdev/blockdev.sh@672 -- # uname -s 00:08:52.472 09:46:46 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:52.472 09:46:46 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:52.472 09:46:46 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:52.472 09:46:46 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:52.472 09:46:46 -- bdev/blockdev.sh@682 -- # dek= 00:08:52.472 09:46:46 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:52.472 09:46:46 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:52.472 09:46:46 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:52.472 09:46:46 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:52.472 09:46:46 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:52.472 09:46:46 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:52.472 09:46:46 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62314 00:08:52.472 09:46:46 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:52.472 09:46:46 -- bdev/blockdev.sh@47 -- # waitforlisten 62314 00:08:52.472 09:46:46 -- common/autotest_common.sh@819 -- # '[' -z 62314 ']' 00:08:52.472 09:46:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.472 09:46:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:52.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.472 09:46:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.472 09:46:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:52.473 09:46:46 -- common/autotest_common.sh@10 -- # set +x 00:08:52.473 09:46:46 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:52.473 [2024-06-10 09:46:46.149209] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
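start_spdk_tgt above backgrounds the target and waitforlisten then polls the RPC socket (max_retries=100 in the trace) until it answers. A condensed sketch of that wait loop, using rpc_get_methods as the probe; the probe choice and timings are illustrative, not the exact autotest_common.sh implementation:

./build/bin/spdk_tgt &
tgt_pid=$!
# Poll the default RPC socket until the target responds, up to 100 tries.
for _ in $(seq 1 100); do
    if ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.5
done

Here the target comes up promptly; its DPDK EAL parameters print next, and the GPT setup can begin once the socket is live.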
00:08:52.473 [2024-06-10 09:46:46.149387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62314 ] 00:08:52.731 [2024-06-10 09:46:46.319136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.731 [2024-06-10 09:46:46.480445] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:52.731 [2024-06-10 09:46:46.480669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.105 09:46:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:54.105 09:46:47 -- common/autotest_common.sh@852 -- # return 0 00:08:54.105 09:46:47 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:54.105 09:46:47 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:54.105 09:46:47 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:54.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.670 Waiting for block devices as requested 00:08:54.670 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.670 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.933 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.933 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.204 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:00.204 09:46:53 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:09:00.204 09:46:53 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:09:00.204 09:46:53 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:09:00.204 09:46:53 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:09:00.204 09:46:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:00.204 09:46:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:09:00.204 09:46:53 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:09:00.204 09:46:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:09:00.204 09:46:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:00.204 09:46:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:00.204 09:46:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:09:00.204 09:46:53 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:09:00.204 09:46:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:00.204 09:46:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:00.204 09:46:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:00.204 09:46:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:09:00.204 09:46:53 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:09:00.204 09:46:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:00.204 09:46:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:00.204 09:46:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:00.204 09:46:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:09:00.204 09:46:53 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:09:00.204 09:46:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:00.204 09:46:53 -- 
common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:00.204 09:46:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:00.204 09:46:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:09:00.204 09:46:53 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:09:00.204 09:46:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:09:00.204 09:46:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:00.204 09:46:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:00.204 09:46:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:09:00.204 09:46:53 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:09:00.204 09:46:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:00.204 09:46:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:00.204 09:46:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:00.204 09:46:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:09:00.204 09:46:53 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:09:00.204 09:46:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:00.204 09:46:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:00.204 09:46:53 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:09:00.204 09:46:53 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:09:00.204 09:46:53 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:09:00.204 09:46:53 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:00.204 09:46:53 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:09:00.204 09:46:53 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:09:00.204 09:46:53 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:09:00.204 09:46:53 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:09:00.204 BYT; 00:09:00.204 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:00.204 09:46:53 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:09:00.204 BYT; 00:09:00.204 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:00.204 09:46:53 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:09:00.204 09:46:53 -- bdev/blockdev.sh@114 -- # break 00:09:00.204 09:46:53 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:09:00.204 09:46:53 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:00.204 09:46:53 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:00.204 09:46:53 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:00.204 09:46:53 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:09:00.204 09:46:53 -- scripts/common.sh@410 -- # local spdk_guid 00:09:00.204 09:46:53 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:00.204 09:46:53 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:00.204 09:46:53 -- scripts/common.sh@415 -- # IFS='()' 00:09:00.204 09:46:53 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:09:00.204 09:46:53 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:00.204 09:46:53 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:00.204 09:46:53 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:00.204 09:46:53 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:00.204 09:46:53 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:00.204 09:46:53 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:09:00.204 09:46:53 -- scripts/common.sh@422 -- # local spdk_guid 00:09:00.204 09:46:53 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:00.204 09:46:53 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:00.204 09:46:53 -- scripts/common.sh@427 -- # IFS='()' 00:09:00.204 09:46:53 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:09:00.204 09:46:53 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:00.204 09:46:53 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:00.204 09:46:53 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:00.204 09:46:53 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:00.204 09:46:53 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:00.204 09:46:53 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:09:01.139 The operation has completed successfully. 00:09:01.139 09:46:54 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:09:02.073 The operation has completed successfully. 
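With both SPDK partition-type GUIDs grepped out of module/bdev/gpt/gpt.h, the setup just traced condenses to three commands: parted lays down a GPT label with two half-disk partitions, then sgdisk retags them with the current and legacy SPDK type GUIDs plus fixed unique partition GUIDs. All values below are taken from this run:

dev=/dev/nvme2n1
parted -s "$dev" mklabel gpt \
    mkpart SPDK_TEST_first 0% 50% \
    mkpart SPDK_TEST_second 50% 100%
# Partition 1 gets the current SPDK GPT type GUID, partition 2 the legacy
# one; the gpt bdev module will expose them as Nvme0n1p1 and Nvme0n1p2.
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
       -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
       -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"

Both sgdisk calls report success above; setup.sh then rebinds the controllers so the new partition table takes effect, as the next lines show.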
00:09:02.073 09:46:55 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:03.007 lsblk: /dev/nvme0c0n1: not a block device 00:09:03.266 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:03.266 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.524 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.524 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.524 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.524 09:46:57 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:09:03.524 09:46:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.524 09:46:57 -- common/autotest_common.sh@10 -- # set +x 00:09:03.524 [] 00:09:03.524 09:46:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:03.524 09:46:57 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:09:03.524 09:46:57 -- bdev/blockdev.sh@79 -- # local json 00:09:03.524 09:46:57 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:09:03.524 09:46:57 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:03.524 09:46:57 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:09:03.524 09:46:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.524 09:46:57 -- common/autotest_common.sh@10 -- # set +x 00:09:03.782 09:46:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:03.782 09:46:57 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:09:03.782 09:46:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.782 09:46:57 -- common/autotest_common.sh@10 -- # set +x 00:09:03.782 09:46:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:03.782 09:46:57 -- bdev/blockdev.sh@738 -- # cat 00:09:03.782 09:46:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:09:03.782 09:46:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.782 09:46:57 -- common/autotest_common.sh@10 -- # set +x 00:09:03.782 09:46:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:03.782 09:46:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:09:03.782 09:46:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.782 09:46:57 -- common/autotest_common.sh@10 -- # set +x 00:09:04.042 09:46:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:04.042 09:46:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:04.042 09:46:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:04.042 09:46:57 -- common/autotest_common.sh@10 -- # set +x 00:09:04.042 09:46:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:04.042 09:46:57 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:09:04.042 09:46:57 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:09:04.042 09:46:57 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:09:04.042 09:46:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:04.042 09:46:57 -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.042 09:46:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:04.042 09:46:57 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:09:04.042 09:46:57 -- bdev/blockdev.sh@747 -- # jq -r .name 00:09:04.043 09:46:57 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f8f9f01b-ba45-49e8-aa7a-0d53d1794c3e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f8f9f01b-ba45-49e8-aa7a-0d53d1794c3e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' 
' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c2aa9ca0-cbef-4014-9217-7769a926b4c9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c2aa9ca0-cbef-4014-9217-7769a926b4c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "bcebb118-f63e-4359-a1ea-1386a802d104"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bcebb118-f63e-4359-a1ea-1386a802d104",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c49af611-0025-4ed5-8e96-bbe0d11ba815"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c49af611-0025-4ed5-8e96-bbe0d11ba815",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe 
Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9e47a21c-dff6-4184-848b-e8a2eb0ba0f5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9e47a21c-dff6-4184-848b-e8a2eb0ba0f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:04.043 09:46:57 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:09:04.043 09:46:57 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:09:04.043 09:46:57 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:09:04.043 09:46:57 -- bdev/blockdev.sh@752 -- # killprocess 62314 00:09:04.043 09:46:57 -- common/autotest_common.sh@926 -- # '[' -z 62314 ']' 00:09:04.043 09:46:57 -- common/autotest_common.sh@930 -- # kill -0 62314 00:09:04.043 09:46:57 -- common/autotest_common.sh@931 -- # uname 00:09:04.043 09:46:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:04.043 09:46:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62314 00:09:04.043 09:46:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:04.043 09:46:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:04.043 killing process with pid 62314 00:09:04.043 09:46:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62314' 00:09:04.043 09:46:57 -- common/autotest_common.sh@945 -- # kill 62314 00:09:04.043 09:46:57 -- common/autotest_common.sh@950 -- # wait 62314 00:09:05.945 09:46:59 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:05.945 09:46:59 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:05.945 09:46:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:05.945 09:46:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:05.945 09:46:59 -- common/autotest_common.sh@10 -- # set +x 00:09:05.945 ************************************ 00:09:05.945 START TEST bdev_hello_world 00:09:05.945 ************************************ 00:09:05.945 09:46:59 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:06.203 [2024-06-10 09:46:59.713676] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:06.203 [2024-06-10 09:46:59.713854] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62999 ] 00:09:06.203 [2024-06-10 09:46:59.883008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.462 [2024-06-10 09:47:00.055876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.029 [2024-06-10 09:47:00.609628] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:07.029 [2024-06-10 09:47:00.609683] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:07.029 [2024-06-10 09:47:00.609709] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:07.029 [2024-06-10 09:47:00.612601] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:07.029 [2024-06-10 09:47:00.613678] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:07.029 [2024-06-10 09:47:00.613720] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:07.029 [2024-06-10 09:47:00.613975] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:07.029 00:09:07.029 [2024-06-10 09:47:00.614014] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:07.965 00:09:07.965 real 0m1.983s 00:09:07.965 user 0m1.664s 00:09:07.965 sys 0m0.207s 00:09:07.965 09:47:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.965 09:47:01 -- common/autotest_common.sh@10 -- # set +x 00:09:07.965 ************************************ 00:09:07.965 END TEST bdev_hello_world 00:09:07.965 ************************************ 00:09:07.965 09:47:01 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:09:07.965 09:47:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:07.965 09:47:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.965 09:47:01 -- common/autotest_common.sh@10 -- # set +x 00:09:07.965 ************************************ 00:09:07.965 START TEST bdev_bounds 00:09:07.965 ************************************ 00:09:07.965 09:47:01 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:09:07.965 09:47:01 -- bdev/blockdev.sh@288 -- # bdevio_pid=63041 00:09:07.965 09:47:01 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:07.965 09:47:01 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:07.965 Process bdevio pid: 63041 00:09:07.965 09:47:01 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 63041' 00:09:07.965 09:47:01 -- bdev/blockdev.sh@291 -- # waitforlisten 63041 00:09:07.965 09:47:01 -- common/autotest_common.sh@819 -- # '[' -z 63041 ']' 00:09:07.965 09:47:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.965 09:47:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:07.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
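(For reference, not part of the captured output: the bdev_hello_world test that just finished reduces to one invocation of SPDK's hello_bdev example against the JSON bdev config. A minimal sketch, assuming the same tree layout as this CI VM:

SPDK_DIR=/home/vagrant/spdk_repo/spdk                 # assumption: layout of this CI VM
# load bdevs from the JSON config, open Nvme0n1p1, write "Hello World!", read it back
"$SPDK_DIR/build/examples/hello_bdev" \
    --json "$SPDK_DIR/test/bdev/bdev.json" \
    -b Nvme0n1p1

The NOTICE lines above, write/read completion and the echoed "Hello World!", are this example's normal output path.)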
00:09:07.965 09:47:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.965 09:47:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:07.965 09:47:01 -- common/autotest_common.sh@10 -- # set +x 00:09:08.223 [2024-06-10 09:47:01.757973] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:08.223 [2024-06-10 09:47:01.758155] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63041 ] 00:09:08.223 [2024-06-10 09:47:01.930596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:08.482 [2024-06-10 09:47:02.107699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.482 [2024-06-10 09:47:02.107806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.482 [2024-06-10 09:47:02.107816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.859 09:47:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:09.859 09:47:03 -- common/autotest_common.sh@852 -- # return 0 00:09:09.859 09:47:03 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:09.859 I/O targets: 00:09:09.859 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:09.859 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:09.859 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:09.859 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:09.859 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:09.859 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:09.859 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:09.859 00:09:09.859 00:09:09.859 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.859 http://cunit.sourceforge.net/ 00:09:09.859 00:09:09.859 00:09:09.859 Suite: bdevio tests on: Nvme3n1 00:09:09.859 Test: blockdev write read block ...passed 00:09:09.859 Test: blockdev write zeroes read block ...passed 00:09:09.859 Test: blockdev write zeroes read no split ...passed 00:09:09.859 Test: blockdev write zeroes read split ...passed 00:09:09.859 Test: blockdev write zeroes read split partial ...passed 00:09:09.859 Test: blockdev reset ...[2024-06-10 09:47:03.579089] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:09.859 [2024-06-10 09:47:03.582890] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:09.859 passed 00:09:09.859 Test: blockdev write read 8 blocks ...passed 00:09:09.859 Test: blockdev write read size > 128k ...passed 00:09:09.859 Test: blockdev write read invalid size ...passed 00:09:09.859 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:09.859 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:09.859 Test: blockdev write read max offset ...passed 00:09:09.859 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:09.859 Test: blockdev writev readv 8 blocks ...passed 00:09:09.859 Test: blockdev writev readv 30 x 1block ...passed 00:09:09.859 Test: blockdev writev readv block ...passed 00:09:09.859 Test: blockdev writev readv size > 128k ...passed 00:09:09.859 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:09.859 Test: blockdev comparev and writev ...[2024-06-10 09:47:03.591688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26f20a000 len:0x1000 00:09:09.859 [2024-06-10 09:47:03.591948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:09.859 passed 00:09:09.859 Test: blockdev nvme passthru rw ...passed 00:09:09.859 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:47:03.593166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:09.859 [2024-06-10 09:47:03.593342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:09.859 passed 00:09:09.859 Test: blockdev nvme admin passthru ...passed 00:09:09.859 Test: blockdev copy ...passed 00:09:09.859 Suite: bdevio tests on: Nvme2n3 00:09:09.859 Test: blockdev write read block ...passed 00:09:09.859 Test: blockdev write zeroes read block ...passed 00:09:09.859 Test: blockdev write zeroes read no split ...passed 00:09:10.118 Test: blockdev write zeroes read split ...passed 00:09:10.118 Test: blockdev write zeroes read split partial ...passed 00:09:10.118 Test: blockdev reset ...[2024-06-10 09:47:03.658794] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:10.118 [2024-06-10 09:47:03.662918] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:10.118 passed 00:09:10.118 Test: blockdev write read 8 blocks ...passed 00:09:10.118 Test: blockdev write read size > 128k ...passed 00:09:10.118 Test: blockdev write read invalid size ...passed 00:09:10.118 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.118 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.118 Test: blockdev write read max offset ...passed 00:09:10.118 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.118 Test: blockdev writev readv 8 blocks ...passed 00:09:10.118 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.118 Test: blockdev writev readv block ...passed 00:09:10.118 Test: blockdev writev readv size > 128k ...passed 00:09:10.118 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.118 Test: blockdev comparev and writev ...[2024-06-10 09:47:03.670884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x276d04000 len:0x1000 00:09:10.118 [2024-06-10 09:47:03.670943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:10.118 passed 00:09:10.118 Test: blockdev nvme passthru rw ...passed 00:09:10.118 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:47:03.671700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:10.118 [2024-06-10 09:47:03.671745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:10.118 passed 00:09:10.118 Test: blockdev nvme admin passthru ...passed 00:09:10.118 Test: blockdev copy ...passed 00:09:10.118 Suite: bdevio tests on: Nvme2n2 00:09:10.118 Test: blockdev write read block ...passed 00:09:10.118 Test: blockdev write zeroes read block ...passed 00:09:10.118 Test: blockdev write zeroes read no split ...passed 00:09:10.118 Test: blockdev write zeroes read split ...passed 00:09:10.118 Test: blockdev write zeroes read split partial ...passed 00:09:10.118 Test: blockdev reset ...[2024-06-10 09:47:03.735244] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:10.118 [2024-06-10 09:47:03.739009] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:10.118 passed 00:09:10.118 Test: blockdev write read 8 blocks ...passed 00:09:10.118 Test: blockdev write read size > 128k ...passed 00:09:10.118 Test: blockdev write read invalid size ...passed 00:09:10.118 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.118 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.118 Test: blockdev write read max offset ...passed 00:09:10.118 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.118 Test: blockdev writev readv 8 blocks ...passed 00:09:10.118 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.119 Test: blockdev writev readv block ...passed 00:09:10.119 Test: blockdev writev readv size > 128k ...passed 00:09:10.119 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.119 Test: blockdev comparev and writev ...[2024-06-10 09:47:03.747066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x276d04000 len:0x1000 00:09:10.119 [2024-06-10 09:47:03.747134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:10.119 passed 00:09:10.119 Test: blockdev nvme passthru rw ...passed 00:09:10.119 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:47:03.747912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:10.119 [2024-06-10 09:47:03.747956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:10.119 passed 00:09:10.119 Test: blockdev nvme admin passthru ...passed 00:09:10.119 Test: blockdev copy ...passed 00:09:10.119 Suite: bdevio tests on: Nvme2n1 00:09:10.119 Test: blockdev write read block ...passed 00:09:10.119 Test: blockdev write zeroes read block ...passed 00:09:10.119 Test: blockdev write zeroes read no split ...passed 00:09:10.119 Test: blockdev write zeroes read split ...passed 00:09:10.119 Test: blockdev write zeroes read split partial ...passed 00:09:10.119 Test: blockdev reset ...[2024-06-10 09:47:03.807694] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:10.119 [2024-06-10 09:47:03.811385] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:10.119 passed 00:09:10.119 Test: blockdev write read 8 blocks ...passed 00:09:10.119 Test: blockdev write read size > 128k ...passed 00:09:10.119 Test: blockdev write read invalid size ...passed 00:09:10.119 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.119 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.119 Test: blockdev write read max offset ...passed 00:09:10.119 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.119 Test: blockdev writev readv 8 blocks ...passed 00:09:10.119 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.119 Test: blockdev writev readv block ...passed 00:09:10.119 Test: blockdev writev readv size > 128k ...passed 00:09:10.119 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.119 Test: blockdev comparev and writev ...[2024-06-10 09:47:03.819528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27f03c000 len:0x1000 00:09:10.119 [2024-06-10 09:47:03.819584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:10.119 passed 00:09:10.119 Test: blockdev nvme passthru rw ...passed 00:09:10.119 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:47:03.820338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:10.119 [2024-06-10 09:47:03.820389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:10.119 passed 00:09:10.119 Test: blockdev nvme admin passthru ...passed 00:09:10.119 Test: blockdev copy ...passed 00:09:10.119 Suite: bdevio tests on: Nvme1n1 00:09:10.119 Test: blockdev write read block ...passed 00:09:10.119 Test: blockdev write zeroes read block ...passed 00:09:10.119 Test: blockdev write zeroes read no split ...passed 00:09:10.119 Test: blockdev write zeroes read split ...passed 00:09:10.119 Test: blockdev write zeroes read split partial ...passed 00:09:10.387 Test: blockdev reset ...[2024-06-10 09:47:03.883349] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:10.387 [2024-06-10 09:47:03.886848] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:10.387 passed 00:09:10.387 Test: blockdev write read 8 blocks ...passed 00:09:10.387 Test: blockdev write read size > 128k ...passed 00:09:10.387 Test: blockdev write read invalid size ...passed 00:09:10.387 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.387 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.387 Test: blockdev write read max offset ...passed 00:09:10.387 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.387 Test: blockdev writev readv 8 blocks ...passed 00:09:10.387 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.387 Test: blockdev writev readv block ...passed 00:09:10.387 Test: blockdev writev readv size > 128k ...passed 00:09:10.387 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.387 Test: blockdev comparev and writev ...[2024-06-10 09:47:03.895202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27f038000 len:0x1000 00:09:10.387 [2024-06-10 09:47:03.895257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:10.387 passed 00:09:10.388 Test: blockdev nvme passthru rw ...passed 00:09:10.388 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:47:03.896068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:10.388 [2024-06-10 09:47:03.896122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:10.388 passed 00:09:10.388 Test: blockdev nvme admin passthru ...passed 00:09:10.388 Test: blockdev copy ...passed 00:09:10.388 Suite: bdevio tests on: Nvme0n1p2 00:09:10.388 Test: blockdev write read block ...passed 00:09:10.388 Test: blockdev write zeroes read block ...passed 00:09:10.388 Test: blockdev write zeroes read no split ...passed 00:09:10.388 Test: blockdev write zeroes read split ...passed 00:09:10.388 Test: blockdev write zeroes read split partial ...passed 00:09:10.388 Test: blockdev reset ...[2024-06-10 09:47:03.960202] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:10.388 [2024-06-10 09:47:03.963594] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:10.388 passed 00:09:10.388 Test: blockdev write read 8 blocks ...passed 00:09:10.388 Test: blockdev write read size > 128k ...passed 00:09:10.388 Test: blockdev write read invalid size ...passed 00:09:10.388 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.388 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.388 Test: blockdev write read max offset ...passed 00:09:10.388 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.388 Test: blockdev writev readv 8 blocks ...passed 00:09:10.388 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.388 Test: blockdev writev readv block ...passed 00:09:10.388 Test: blockdev writev readv size > 128k ...passed 00:09:10.388 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.388 Test: blockdev comparev and writev ...passed 00:09:10.388 Test: blockdev nvme passthru rw ...passed 00:09:10.388 Test: blockdev nvme passthru vendor specific ...passed 00:09:10.388 Test: blockdev nvme admin passthru ...passed 00:09:10.388 Test: blockdev copy ...[2024-06-10 09:47:03.971072] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:10.388 separate metadata which is not supported yet. 00:09:10.388 passed 00:09:10.388 Suite: bdevio tests on: Nvme0n1p1 00:09:10.388 Test: blockdev write read block ...passed 00:09:10.388 Test: blockdev write zeroes read block ...passed 00:09:10.388 Test: blockdev write zeroes read no split ...passed 00:09:10.388 Test: blockdev write zeroes read split ...passed 00:09:10.388 Test: blockdev write zeroes read split partial ...passed 00:09:10.388 Test: blockdev reset ...[2024-06-10 09:47:04.025340] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:10.388 [2024-06-10 09:47:04.028766] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:10.388 passed 00:09:10.388 Test: blockdev write read 8 blocks ...passed 00:09:10.388 Test: blockdev write read size > 128k ...passed 00:09:10.388 Test: blockdev write read invalid size ...passed 00:09:10.388 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.388 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.388 Test: blockdev write read max offset ...passed 00:09:10.388 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.388 Test: blockdev writev readv 8 blocks ...passed 00:09:10.388 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.388 Test: blockdev writev readv block ...passed 00:09:10.388 Test: blockdev writev readv size > 128k ...passed 00:09:10.388 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.388 Test: blockdev comparev and writev ...passed 00:09:10.388 Test: blockdev nvme passthru rw ...passed 00:09:10.388 Test: blockdev nvme passthru vendor specific ...passed 00:09:10.388 Test: blockdev nvme admin passthru ...passed 00:09:10.388 Test: blockdev copy ...[2024-06-10 09:47:04.036178] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:10.388 separate metadata which is not supported yet. 
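(Not part of the captured output: the bdevio suites above come from two cooperating pieces. The bdevio app loads the bdevs from the JSON config and serves RPC on the default /var/tmp/spdk.sock, and tests.py drives the per-bdev CUnit suites over that socket. A sketch of the orchestration, with the flags copied from the run above; the backgrounding and kill wiring here is illustrative, the real harness uses its waitforlisten/killprocess helpers:

SPDK_DIR=/home/vagrant/spdk_repo/spdk                 # assumption: layout of this CI VM
# start the bdevio RPC server with the same flags logged above
"$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK_DIR/test/bdev/bdev.json" &
bdevio_pid=$!
sleep 1            # illustrative only; the harness polls the RPC socket instead of sleeping
# run the CUnit suites against every registered bdev, as seen in the output above
"$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests
kill "$bdevio_pid")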
00:09:10.388 passed 00:09:10.388 00:09:10.388 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.388 suites 7 7 n/a 0 0 00:09:10.388 tests 161 161 161 0 0 00:09:10.388 asserts 1006 1006 1006 0 n/a 00:09:10.388 00:09:10.388 Elapsed time = 1.390 seconds 00:09:10.388 0 00:09:10.388 09:47:04 -- bdev/blockdev.sh@293 -- # killprocess 63041 00:09:10.388 09:47:04 -- common/autotest_common.sh@926 -- # '[' -z 63041 ']' 00:09:10.388 09:47:04 -- common/autotest_common.sh@930 -- # kill -0 63041 00:09:10.388 09:47:04 -- common/autotest_common.sh@931 -- # uname 00:09:10.388 09:47:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:10.388 09:47:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63041 00:09:10.388 09:47:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:10.388 09:47:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:10.388 09:47:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63041' 00:09:10.388 killing process with pid 63041 00:09:10.388 09:47:04 -- common/autotest_common.sh@945 -- # kill 63041 00:09:10.388 09:47:04 -- common/autotest_common.sh@950 -- # wait 63041 00:09:11.348 09:47:04 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:11.348 00:09:11.348 real 0m3.295s 00:09:11.348 user 0m8.728s 00:09:11.348 sys 0m0.365s 00:09:11.348 09:47:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.348 09:47:04 -- common/autotest_common.sh@10 -- # set +x 00:09:11.348 ************************************ 00:09:11.348 END TEST bdev_bounds 00:09:11.348 ************************************ 00:09:11.348 09:47:04 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:11.348 09:47:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:11.348 09:47:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.348 09:47:05 -- common/autotest_common.sh@10 -- # set +x 00:09:11.348 ************************************ 00:09:11.348 START TEST bdev_nbd 00:09:11.348 ************************************ 00:09:11.348 09:47:05 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:11.348 09:47:05 -- bdev/blockdev.sh@298 -- # uname -s 00:09:11.348 09:47:05 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:11.348 09:47:05 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.348 09:47:05 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:11.348 09:47:05 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:11.348 09:47:05 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:11.348 09:47:05 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:09:11.348 09:47:05 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:09:11.348 09:47:05 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:11.348 09:47:05 -- bdev/blockdev.sh@309 -- # local nbd_all 00:09:11.348 09:47:05 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:09:11.348 09:47:05 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:11.348 09:47:05 -- bdev/blockdev.sh@312 -- # local nbd_list 00:09:11.348 09:47:05 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:11.348 09:47:05 -- bdev/blockdev.sh@313 -- # local bdev_list 00:09:11.348 09:47:05 -- bdev/blockdev.sh@316 -- # nbd_pid=63109 00:09:11.348 09:47:05 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:11.348 09:47:05 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:11.349 09:47:05 -- bdev/blockdev.sh@318 -- # waitforlisten 63109 /var/tmp/spdk-nbd.sock 00:09:11.349 09:47:05 -- common/autotest_common.sh@819 -- # '[' -z 63109 ']' 00:09:11.349 09:47:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:11.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:11.349 09:47:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:11.349 09:47:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:11.349 09:47:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:11.349 09:47:05 -- common/autotest_common.sh@10 -- # set +x 00:09:11.349 [2024-06-10 09:47:05.098304] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:11.349 [2024-06-10 09:47:05.098468] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.607 [2024-06-10 09:47:05.262992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.864 [2024-06-10 09:47:05.430917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.236 09:47:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:13.236 09:47:06 -- common/autotest_common.sh@852 -- # return 0 00:09:13.236 09:47:06 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:13.236 09:47:06 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.236 09:47:06 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:13.236 09:47:06 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@24 -- # local i 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:13.237 09:47:06 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:13.237 09:47:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:13.237 09:47:06 -- common/autotest_common.sh@857 -- # local i 00:09:13.237 09:47:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:13.237 09:47:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:13.237 09:47:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:13.237 09:47:06 -- common/autotest_common.sh@861 -- # break 00:09:13.237 09:47:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:13.237 09:47:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:13.237 09:47:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.237 1+0 records in 00:09:13.237 1+0 records out 00:09:13.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473352 s, 8.7 MB/s 00:09:13.237 09:47:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.237 09:47:06 -- common/autotest_common.sh@874 -- # size=4096 00:09:13.237 09:47:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.237 09:47:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:13.237 09:47:06 -- common/autotest_common.sh@877 -- # return 0 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:13.237 09:47:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:13.495 09:47:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:13.495 09:47:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:13.495 09:47:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:13.495 09:47:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:13.495 09:47:07 -- common/autotest_common.sh@857 -- # local i 00:09:13.495 09:47:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:13.495 09:47:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:13.495 09:47:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:13.495 09:47:07 -- common/autotest_common.sh@861 -- # break 00:09:13.495 09:47:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:13.495 09:47:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:13.495 09:47:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.495 1+0 records in 00:09:13.495 1+0 records out 00:09:13.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583846 s, 7.0 MB/s 00:09:13.495 09:47:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.495 09:47:07 -- common/autotest_common.sh@874 -- # size=4096 00:09:13.495 09:47:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.495 09:47:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:13.495 09:47:07 -- common/autotest_common.sh@877 -- # return 0 00:09:13.495 09:47:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:13.495 09:47:07 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:13.495 09:47:07 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:13.754 09:47:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:13.754 09:47:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:13.754 09:47:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:13.754 09:47:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:09:13.754 09:47:07 -- common/autotest_common.sh@857 -- # local i 00:09:13.754 09:47:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:13.754 09:47:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:13.754 09:47:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:09:14.013 09:47:07 -- common/autotest_common.sh@861 -- # break 00:09:14.013 09:47:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:14.013 09:47:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:14.013 09:47:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.013 1+0 records in 00:09:14.013 1+0 records out 00:09:14.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454359 s, 9.0 MB/s 00:09:14.013 09:47:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.013 09:47:07 -- common/autotest_common.sh@874 -- # size=4096 00:09:14.013 09:47:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.013 09:47:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:14.013 09:47:07 -- common/autotest_common.sh@877 -- # return 0 00:09:14.013 09:47:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:14.013 09:47:07 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:14.013 09:47:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:14.271 09:47:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:14.271 09:47:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:14.271 09:47:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:14.271 09:47:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:09:14.271 09:47:07 -- common/autotest_common.sh@857 -- # local i 00:09:14.271 09:47:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:14.271 09:47:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:14.271 09:47:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:09:14.271 09:47:07 -- common/autotest_common.sh@861 -- # break 00:09:14.271 09:47:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:14.271 09:47:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:14.271 09:47:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.272 1+0 records in 00:09:14.272 1+0 records out 00:09:14.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705873 s, 5.8 MB/s 00:09:14.272 09:47:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.272 09:47:07 -- common/autotest_common.sh@874 -- # size=4096 00:09:14.272 09:47:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.272 09:47:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:14.272 09:47:07 -- common/autotest_common.sh@877 -- # return 0 00:09:14.272 09:47:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:14.272 09:47:07 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:14.272 09:47:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:14.530 09:47:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:14.530 09:47:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:14.530 09:47:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:14.530 09:47:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:09:14.530 09:47:08 -- common/autotest_common.sh@857 -- # local i 00:09:14.530 09:47:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:14.530 09:47:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:14.530 09:47:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:09:14.530 09:47:08 -- common/autotest_common.sh@861 -- # break 00:09:14.530 09:47:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:14.530 09:47:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:14.530 09:47:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.530 1+0 records in 00:09:14.530 1+0 records out 00:09:14.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726703 s, 5.6 MB/s 00:09:14.530 09:47:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.530 09:47:08 -- common/autotest_common.sh@874 -- # size=4096 00:09:14.530 09:47:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.530 09:47:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:14.530 09:47:08 -- common/autotest_common.sh@877 -- # return 0 00:09:14.530 09:47:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:14.530 09:47:08 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:14.530 09:47:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:14.788 09:47:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:14.788 09:47:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:14.788 09:47:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:14.788 09:47:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:09:14.788 09:47:08 -- common/autotest_common.sh@857 -- # local i 00:09:14.788 09:47:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:14.788 09:47:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:14.788 09:47:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:09:14.788 09:47:08 -- common/autotest_common.sh@861 -- # break 00:09:14.788 09:47:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:14.788 09:47:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:14.788 09:47:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.788 1+0 records in 00:09:14.788 1+0 records out 00:09:14.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584945 s, 7.0 MB/s 00:09:14.788 09:47:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.788 09:47:08 -- common/autotest_common.sh@874 -- # size=4096 00:09:14.788 09:47:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.788 09:47:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:14.788 09:47:08 -- common/autotest_common.sh@877 -- # return 0 
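(Not part of the captured output: every nbd_start_disk call above is followed by the same readiness check. Reconstructed from the xtrace lines, waitfornbd amounts to the sketch below; the retry delay is an assumption, since this run always found the device on the first probe and so no sleep appears in the trace:

waitfornbd() {
    local nbd_name=$1 i size
    # poll until the kernel exposes the device in /proc/partitions
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # assumption: back-off between probes
    done
    # prove real I/O works: one direct read of a single 4096-byte block...
    dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct
    # ...then check the copy is non-empty (the '[' 4096 '!=' 0 ']' records above)
    size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    [ "$size" != 0 ]
})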
00:09:14.788 09:47:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:14.788 09:47:08 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:14.788 09:47:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:15.047 09:47:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:15.047 09:47:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:15.047 09:47:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:15.047 09:47:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:09:15.047 09:47:08 -- common/autotest_common.sh@857 -- # local i 00:09:15.047 09:47:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:15.047 09:47:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:15.047 09:47:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:09:15.047 09:47:08 -- common/autotest_common.sh@861 -- # break 00:09:15.047 09:47:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:15.047 09:47:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:15.047 09:47:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:15.047 1+0 records in 00:09:15.047 1+0 records out 00:09:15.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705471 s, 5.8 MB/s 00:09:15.047 09:47:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.047 09:47:08 -- common/autotest_common.sh@874 -- # size=4096 00:09:15.047 09:47:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.047 09:47:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:15.047 09:47:08 -- common/autotest_common.sh@877 -- # return 0 00:09:15.047 09:47:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:15.047 09:47:08 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:15.047 09:47:08 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.305 09:47:08 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:15.305 { 00:09:15.305 "nbd_device": "/dev/nbd0", 00:09:15.305 "bdev_name": "Nvme0n1p1" 00:09:15.305 }, 00:09:15.305 { 00:09:15.305 "nbd_device": "/dev/nbd1", 00:09:15.305 "bdev_name": "Nvme0n1p2" 00:09:15.305 }, 00:09:15.305 { 00:09:15.305 "nbd_device": "/dev/nbd2", 00:09:15.305 "bdev_name": "Nvme1n1" 00:09:15.305 }, 00:09:15.305 { 00:09:15.305 "nbd_device": "/dev/nbd3", 00:09:15.305 "bdev_name": "Nvme2n1" 00:09:15.305 }, 00:09:15.305 { 00:09:15.305 "nbd_device": "/dev/nbd4", 00:09:15.305 "bdev_name": "Nvme2n2" 00:09:15.305 }, 00:09:15.305 { 00:09:15.305 "nbd_device": "/dev/nbd5", 00:09:15.305 "bdev_name": "Nvme2n3" 00:09:15.305 }, 00:09:15.305 { 00:09:15.305 "nbd_device": "/dev/nbd6", 00:09:15.305 "bdev_name": "Nvme3n1" 00:09:15.305 } 00:09:15.305 ]' 00:09:15.305 09:47:08 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:15.305 09:47:08 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:15.305 { 00:09:15.305 "nbd_device": "/dev/nbd0", 00:09:15.305 "bdev_name": "Nvme0n1p1" 00:09:15.305 }, 00:09:15.305 { 00:09:15.305 "nbd_device": "/dev/nbd1", 00:09:15.305 "bdev_name": "Nvme0n1p2" 00:09:15.305 }, 00:09:15.305 { 00:09:15.305 "nbd_device": "/dev/nbd2", 00:09:15.305 "bdev_name": "Nvme1n1" 00:09:15.305 }, 00:09:15.305 { 00:09:15.305 "nbd_device": "/dev/nbd3", 00:09:15.305 "bdev_name": "Nvme2n1" 00:09:15.305 }, 00:09:15.305 { 
00:09:15.305 "nbd_device": "/dev/nbd4", 00:09:15.305 "bdev_name": "Nvme2n2" 00:09:15.305 }, 00:09:15.305 { 00:09:15.305 "nbd_device": "/dev/nbd5", 00:09:15.305 "bdev_name": "Nvme2n3" 00:09:15.305 }, 00:09:15.305 { 00:09:15.305 "nbd_device": "/dev/nbd6", 00:09:15.305 "bdev_name": "Nvme3n1" 00:09:15.305 } 00:09:15.305 ]' 00:09:15.305 09:47:08 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:15.305 09:47:08 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:15.305 09:47:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.305 09:47:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:15.305 09:47:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:15.305 09:47:08 -- bdev/nbd_common.sh@51 -- # local i 00:09:15.305 09:47:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.305 09:47:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:15.563 09:47:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:15.563 09:47:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:15.563 09:47:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:15.563 09:47:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.563 09:47:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.563 09:47:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:15.563 09:47:09 -- bdev/nbd_common.sh@41 -- # break 00:09:15.563 09:47:09 -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.564 09:47:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.564 09:47:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:15.821 09:47:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:15.821 09:47:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:15.821 09:47:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:15.821 09:47:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.821 09:47:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.821 09:47:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:15.821 09:47:09 -- bdev/nbd_common.sh@41 -- # break 00:09:15.821 09:47:09 -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.821 09:47:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.821 09:47:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:16.080 09:47:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:16.080 09:47:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:16.080 09:47:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:16.080 09:47:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.080 09:47:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.080 09:47:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:16.080 09:47:09 -- bdev/nbd_common.sh@41 -- # break 00:09:16.080 09:47:09 -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.080 09:47:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.080 09:47:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:16.338 09:47:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
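(Not part of the captured output: the stop path traced here mirrors the start path. nbd_stop_disk detaches a device over the nbd server's RPC socket, and waitfornbd_exit polls /proc/partitions until the name disappears. A sketch of the teardown loop, reconstructed from the trace; the retry delay is again an assumption:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6; do
    "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    name=$(basename "$dev")
    # waitfornbd_exit: done as soon as the partition entry is gone
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1    # assumption: back-off between probes
    done
done)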
00:09:16.338 09:47:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:16.338 09:47:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:16.338 09:47:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.338 09:47:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.338 09:47:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:16.338 09:47:09 -- bdev/nbd_common.sh@41 -- # break 00:09:16.338 09:47:09 -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.338 09:47:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.338 09:47:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:16.597 09:47:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:16.597 09:47:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:16.597 09:47:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:16.597 09:47:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.597 09:47:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.597 09:47:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:16.597 09:47:10 -- bdev/nbd_common.sh@41 -- # break 00:09:16.597 09:47:10 -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.597 09:47:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.597 09:47:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@41 -- # break 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@41 -- # break 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.856 09:47:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:17.114 09:47:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:17.114 09:47:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:17.114 09:47:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:17.373 09:47:10 -- 
bdev/nbd_common.sh@65 -- # true 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@65 -- # count=0 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@122 -- # count=0 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@127 -- # return 0 00:09:17.373 09:47:10 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@12 -- # local i 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:17.373 09:47:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:17.631 /dev/nbd0 00:09:17.631 09:47:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:17.631 09:47:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:17.631 09:47:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:17.631 09:47:11 -- common/autotest_common.sh@857 -- # local i 00:09:17.631 09:47:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:17.631 09:47:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:17.631 09:47:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:17.631 09:47:11 -- common/autotest_common.sh@861 -- # break 00:09:17.631 09:47:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:17.631 09:47:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:17.632 09:47:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.632 1+0 records in 00:09:17.632 1+0 records out 00:09:17.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486297 s, 8.4 MB/s 00:09:17.632 09:47:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.632 09:47:11 -- common/autotest_common.sh@874 -- # size=4096 00:09:17.632 09:47:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.632 09:47:11 
-- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:17.632 09:47:11 -- common/autotest_common.sh@877 -- # return 0 00:09:17.632 09:47:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.632 09:47:11 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:17.632 09:47:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:17.890 /dev/nbd1 00:09:17.890 09:47:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:17.890 09:47:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:17.890 09:47:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:17.890 09:47:11 -- common/autotest_common.sh@857 -- # local i 00:09:17.890 09:47:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:17.890 09:47:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:17.890 09:47:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:17.890 09:47:11 -- common/autotest_common.sh@861 -- # break 00:09:17.890 09:47:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:17.890 09:47:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:17.890 09:47:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.890 1+0 records in 00:09:17.890 1+0 records out 00:09:17.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432454 s, 9.5 MB/s 00:09:17.890 09:47:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.890 09:47:11 -- common/autotest_common.sh@874 -- # size=4096 00:09:17.890 09:47:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.890 09:47:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:17.890 09:47:11 -- common/autotest_common.sh@877 -- # return 0 00:09:17.890 09:47:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.890 09:47:11 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:17.890 09:47:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:18.149 /dev/nbd10 00:09:18.149 09:47:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:18.149 09:47:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:18.149 09:47:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:09:18.149 09:47:11 -- common/autotest_common.sh@857 -- # local i 00:09:18.149 09:47:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:18.149 09:47:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:18.149 09:47:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:09:18.149 09:47:11 -- common/autotest_common.sh@861 -- # break 00:09:18.149 09:47:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:18.149 09:47:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:18.149 09:47:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.149 1+0 records in 00:09:18.149 1+0 records out 00:09:18.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632755 s, 6.5 MB/s 00:09:18.149 09:47:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.149 09:47:11 -- common/autotest_common.sh@874 -- # size=4096 00:09:18.149 09:47:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
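(Not part of the captured output: this second pass pins each bdev to an explicit node, Nvme0n1p1 to /dev/nbd0, Nvme0n1p2 to /dev/nbd1, Nvme1n1 to /dev/nbd10, and so on through /dev/nbd14. Attachments can be confirmed the same way the harness does it, by asking the nbd server for its mapping table; a sketch using the nbd_get_disks output format shown earlier in this log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# start one explicit pairing (the next one this run performs is Nvme2n1 -> /dev/nbd11)
"$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11
# dump the current nbd_device -> bdev_name mapping and count attached devices,
# matching the jq and grep -c pipeline traced above
"$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
"$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)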
00:09:18.149 09:47:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:18.149 09:47:11 -- common/autotest_common.sh@877 -- # return 0 00:09:18.149 09:47:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:18.149 09:47:11 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:18.149 09:47:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:18.407 /dev/nbd11 00:09:18.407 09:47:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:18.407 09:47:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:18.407 09:47:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:09:18.407 09:47:12 -- common/autotest_common.sh@857 -- # local i 00:09:18.407 09:47:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:18.407 09:47:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:18.407 09:47:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:09:18.407 09:47:12 -- common/autotest_common.sh@861 -- # break 00:09:18.407 09:47:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:18.407 09:47:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:18.407 09:47:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.407 1+0 records in 00:09:18.407 1+0 records out 00:09:18.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674428 s, 6.1 MB/s 00:09:18.407 09:47:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.407 09:47:12 -- common/autotest_common.sh@874 -- # size=4096 00:09:18.407 09:47:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.407 09:47:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:18.407 09:47:12 -- common/autotest_common.sh@877 -- # return 0 00:09:18.407 09:47:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:18.407 09:47:12 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:18.407 09:47:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:18.666 /dev/nbd12 00:09:18.666 09:47:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:18.666 09:47:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:18.666 09:47:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:09:18.666 09:47:12 -- common/autotest_common.sh@857 -- # local i 00:09:18.666 09:47:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:18.666 09:47:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:18.666 09:47:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:09:18.666 09:47:12 -- common/autotest_common.sh@861 -- # break 00:09:18.666 09:47:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:18.666 09:47:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:18.666 09:47:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.666 1+0 records in 00:09:18.666 1+0 records out 00:09:18.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596886 s, 6.9 MB/s 00:09:18.666 09:47:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.666 09:47:12 -- common/autotest_common.sh@874 -- # size=4096 00:09:18.666 09:47:12 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.666 09:47:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:18.666 09:47:12 -- common/autotest_common.sh@877 -- # return 0 00:09:18.666 09:47:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:18.666 09:47:12 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:18.666 09:47:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:18.924 /dev/nbd13 00:09:18.924 09:47:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:18.924 09:47:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:18.924 09:47:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:09:18.924 09:47:12 -- common/autotest_common.sh@857 -- # local i 00:09:18.924 09:47:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:18.924 09:47:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:18.924 09:47:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:09:18.924 09:47:12 -- common/autotest_common.sh@861 -- # break 00:09:18.924 09:47:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:18.924 09:47:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:18.924 09:47:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.924 1+0 records in 00:09:18.924 1+0 records out 00:09:18.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627296 s, 6.5 MB/s 00:09:18.925 09:47:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.925 09:47:12 -- common/autotest_common.sh@874 -- # size=4096 00:09:18.925 09:47:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.925 09:47:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:18.925 09:47:12 -- common/autotest_common.sh@877 -- # return 0 00:09:18.925 09:47:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:18.925 09:47:12 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:18.925 09:47:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:19.183 /dev/nbd14 00:09:19.183 09:47:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:19.183 09:47:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:19.183 09:47:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:09:19.183 09:47:12 -- common/autotest_common.sh@857 -- # local i 00:09:19.183 09:47:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:19.183 09:47:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:19.183 09:47:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:09:19.183 09:47:12 -- common/autotest_common.sh@861 -- # break 00:09:19.183 09:47:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:19.183 09:47:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:19.183 09:47:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:19.183 1+0 records in 00:09:19.183 1+0 records out 00:09:19.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000909547 s, 4.5 MB/s 00:09:19.183 09:47:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:19.183 09:47:12 -- common/autotest_common.sh@874 -- # size=4096 00:09:19.183 09:47:12 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:19.183 09:47:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:19.183 09:47:12 -- common/autotest_common.sh@877 -- # return 0 00:09:19.183 09:47:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:19.183 09:47:12 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:19.183 09:47:12 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:19.183 09:47:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.183 09:47:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:19.441 09:47:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd0", 00:09:19.441 "bdev_name": "Nvme0n1p1" 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd1", 00:09:19.441 "bdev_name": "Nvme0n1p2" 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd10", 00:09:19.441 "bdev_name": "Nvme1n1" 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd11", 00:09:19.441 "bdev_name": "Nvme2n1" 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd12", 00:09:19.441 "bdev_name": "Nvme2n2" 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd13", 00:09:19.441 "bdev_name": "Nvme2n3" 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd14", 00:09:19.441 "bdev_name": "Nvme3n1" 00:09:19.441 } 00:09:19.441 ]' 00:09:19.441 09:47:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd0", 00:09:19.441 "bdev_name": "Nvme0n1p1" 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd1", 00:09:19.441 "bdev_name": "Nvme0n1p2" 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd10", 00:09:19.441 "bdev_name": "Nvme1n1" 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd11", 00:09:19.441 "bdev_name": "Nvme2n1" 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd12", 00:09:19.441 "bdev_name": "Nvme2n2" 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd13", 00:09:19.441 "bdev_name": "Nvme2n3" 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "nbd_device": "/dev/nbd14", 00:09:19.441 "bdev_name": "Nvme3n1" 00:09:19.441 } 00:09:19.441 ]' 00:09:19.441 09:47:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:19.441 09:47:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:19.441 /dev/nbd1 00:09:19.441 /dev/nbd10 00:09:19.441 /dev/nbd11 00:09:19.442 /dev/nbd12 00:09:19.442 /dev/nbd13 00:09:19.442 /dev/nbd14' 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:19.442 /dev/nbd1 00:09:19.442 /dev/nbd10 00:09:19.442 /dev/nbd11 00:09:19.442 /dev/nbd12 00:09:19.442 /dev/nbd13 00:09:19.442 /dev/nbd14' 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@65 -- # count=7 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@66 -- # echo 7 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@95 -- # count=7 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:19.442 256+0 records in 00:09:19.442 256+0 records out 00:09:19.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106032 s, 98.9 MB/s 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.442 09:47:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:19.700 256+0 records in 00:09:19.700 256+0 records out 00:09:19.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16693 s, 6.3 MB/s 00:09:19.700 09:47:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.700 09:47:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:19.959 256+0 records in 00:09:19.959 256+0 records out 00:09:19.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187098 s, 5.6 MB/s 00:09:19.959 09:47:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.959 09:47:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:19.959 256+0 records in 00:09:19.959 256+0 records out 00:09:19.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189647 s, 5.5 MB/s 00:09:19.959 09:47:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.959 09:47:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:20.217 256+0 records in 00:09:20.217 256+0 records out 00:09:20.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190081 s, 5.5 MB/s 00:09:20.217 09:47:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.217 09:47:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:20.475 256+0 records in 00:09:20.475 256+0 records out 00:09:20.475 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.181753 s, 5.8 MB/s 00:09:20.475 09:47:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.475 09:47:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:20.734 256+0 records in 00:09:20.734 256+0 records out 00:09:20.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189095 s, 5.5 MB/s 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:20.734 256+0 records in 00:09:20.734 256+0 records out 00:09:20.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16436 s, 6.4 MB/s 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@71 -- # local 
operation=verify 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.734 09:47:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:20.992 09:47:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.992 09:47:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:20.992 09:47:14 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:20.992 09:47:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:20.992 09:47:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.992 09:47:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:20.992 09:47:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:20.992 09:47:14 -- bdev/nbd_common.sh@51 -- # local i 00:09:20.992 09:47:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.992 09:47:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:21.250 09:47:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:21.250 09:47:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:21.250 09:47:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:21.250 09:47:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.250 09:47:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.250 09:47:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:21.250 09:47:14 -- bdev/nbd_common.sh@41 -- # break 00:09:21.250 09:47:14 -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.250 09:47:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.250 09:47:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:21.508 09:47:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:21.508 09:47:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:21.508 09:47:15 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:09:21.508 09:47:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.508 09:47:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.508 09:47:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:21.508 09:47:15 -- bdev/nbd_common.sh@41 -- # break 00:09:21.508 09:47:15 -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.508 09:47:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.508 09:47:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:21.766 09:47:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:21.766 09:47:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:21.766 09:47:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:21.766 09:47:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.766 09:47:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.766 09:47:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:21.766 09:47:15 -- bdev/nbd_common.sh@41 -- # break 00:09:21.766 09:47:15 -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.766 09:47:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.766 09:47:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:22.024 09:47:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:22.024 09:47:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:22.024 09:47:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:22.024 09:47:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.024 09:47:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.024 09:47:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:22.024 09:47:15 -- bdev/nbd_common.sh@41 -- # break 00:09:22.024 09:47:15 -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.024 09:47:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.024 09:47:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:22.282 09:47:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:22.282 09:47:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:22.282 09:47:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:22.282 09:47:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.282 09:47:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.282 09:47:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:22.282 09:47:15 -- bdev/nbd_common.sh@41 -- # break 00:09:22.282 09:47:15 -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.282 09:47:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.282 09:47:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@41 -- # break 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
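Before the per-device teardown traced here, nbd_dd_data_verify ran a plain dd/cmp round trip: 1 MiB of random data is written through every exported device with direct I/O, then each device is compared back against the source file byte-for-byte. Condensed into plain bash, with a /tmp path standing in for the repo's nbdrandtest file:

  tmp=/tmp/nbdrandtest
  nbds='/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
  dd if=/dev/urandom of=$tmp bs=4096 count=256            # 1 MiB of random data
  for dev in $nbds; do
      dd if=$tmp of=$dev bs=4096 count=256 oflag=direct   # write pass through each device
  done
  for dev in $nbds; do
      cmp -b -n 1M $tmp $dev                              # byte-for-byte verify pass
  done
  rm $tmp

Any mismatch makes cmp exit non-zero, which fails the test before the devices are stopped.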
00:09:22.540 09:47:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@41 -- # break 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.540 09:47:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:22.798 09:47:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:22.798 09:47:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:22.798 09:47:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@65 -- # true 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@65 -- # count=0 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@104 -- # count=0 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@109 -- # return 0 00:09:23.056 09:47:16 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:23.056 09:47:16 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:23.318 malloc_lvol_verify 00:09:23.318 09:47:16 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:23.575 addc3c83-4128-4066-bb8d-72fdaddc0bd0 00:09:23.576 09:47:17 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:23.834 63a0a631-e935-4528-967e-0ab29a6bf719 00:09:23.834 09:47:17 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:24.092 /dev/nbd0 00:09:24.092 09:47:17 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:24.092 mke2fs 1.46.5 (30-Dec-2021) 00:09:24.092 Discarding device blocks: 0/4096 done 00:09:24.092 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:24.092 00:09:24.092 Allocating group tables: 0/1 done 00:09:24.092 Writing inode tables: 0/1 done 00:09:24.092 Creating journal (1024 blocks): done 
00:09:24.092 Writing superblocks and filesystem accounting information: 0/1 done 00:09:24.092 00:09:24.092 09:47:17 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:24.092 09:47:17 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:24.092 09:47:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.092 09:47:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:24.092 09:47:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:24.092 09:47:17 -- bdev/nbd_common.sh@51 -- # local i 00:09:24.092 09:47:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.092 09:47:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:24.350 09:47:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:24.350 09:47:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:24.350 09:47:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:24.350 09:47:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.350 09:47:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.350 09:47:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:24.350 09:47:17 -- bdev/nbd_common.sh@41 -- # break 00:09:24.350 09:47:17 -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.350 09:47:17 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:24.350 09:47:17 -- bdev/nbd_common.sh@147 -- # return 0 00:09:24.350 09:47:17 -- bdev/blockdev.sh@324 -- # killprocess 63109 00:09:24.350 09:47:17 -- common/autotest_common.sh@926 -- # '[' -z 63109 ']' 00:09:24.350 09:47:17 -- common/autotest_common.sh@930 -- # kill -0 63109 00:09:24.350 09:47:17 -- common/autotest_common.sh@931 -- # uname 00:09:24.350 09:47:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:24.350 09:47:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63109 00:09:24.350 killing process with pid 63109 00:09:24.350 09:47:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:24.350 09:47:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:24.350 09:47:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63109' 00:09:24.350 09:47:17 -- common/autotest_common.sh@945 -- # kill 63109 00:09:24.350 09:47:17 -- common/autotest_common.sh@950 -- # wait 63109 00:09:25.284 ************************************ 00:09:25.284 END TEST bdev_nbd 00:09:25.284 ************************************ 00:09:25.284 09:47:18 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:09:25.284 00:09:25.284 real 0m13.953s 00:09:25.284 user 0m19.627s 00:09:25.284 sys 0m4.218s 00:09:25.284 09:47:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.284 09:47:18 -- common/autotest_common.sh@10 -- # set +x 00:09:25.284 09:47:19 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:09:25.284 09:47:19 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:09:25.284 09:47:19 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:09:25.284 skipping fio tests on NVMe due to multi-ns failures. 00:09:25.284 09:47:19 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
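The nbd_with_lvol_verify step that closed out bdev_nbd above stacks a logical volume on a malloc bdev and proves the whole path usable by formatting it. The RPC sequence against the dedicated /var/tmp/spdk-nbd.sock socket, condensed from the trace with the rpc.py path shortened:

  rpc='scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512    # 16 MB malloc bdev, 512 B blocks
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs    # lvstore on top of it
  $rpc bdev_lvol_create lvol 4 -l lvs                     # 4 MB lvol inside the store
  $rpc nbd_start_disk lvs/lvol /dev/nbd0                  # export the lvol as /dev/nbd0
  mkfs.ext4 /dev/nbd0                                     # a clean mkfs is the pass condition
  $rpc nbd_stop_disk /dev/nbd0

mkfs.ext4's exit status is captured as mkfs_ret, and the '[' 0 -ne 0 ']' check in the trace is that status being verified before killprocess tears the nbd server down.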
00:09:25.284 09:47:19 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:25.284 09:47:19 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:25.284 09:47:19 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:09:25.285 09:47:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:25.285 09:47:19 -- common/autotest_common.sh@10 -- # set +x 00:09:25.285 ************************************ 00:09:25.285 START TEST bdev_verify 00:09:25.285 ************************************ 00:09:25.285 09:47:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:25.542 [2024-06-10 09:47:19.107070] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:25.542 [2024-06-10 09:47:19.107293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63556 ] 00:09:25.542 [2024-06-10 09:47:19.281040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:25.800 [2024-06-10 09:47:19.492767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.800 [2024-06-10 09:47:19.492783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.366 Running I/O for 5 seconds... 00:09:31.637 00:09:31.637 Latency(us) 00:09:31.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.637 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0x0 length 0x5e800 00:09:31.637 Nvme0n1p1 : 5.05 2371.01 9.26 0.00 0.00 53822.48 7804.74 57433.37 00:09:31.637 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0x5e800 length 0x5e800 00:09:31.637 Nvme0n1p1 : 5.05 2387.06 9.32 0.00 0.00 53472.42 7060.01 56480.12 00:09:31.637 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0x0 length 0x5e7ff 00:09:31.637 Nvme0n1p2 : 5.05 2370.12 9.26 0.00 0.00 53796.33 8340.95 56480.12 00:09:31.637 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:31.637 Nvme0n1p2 : 5.05 2385.91 9.32 0.00 0.00 53396.00 8162.21 50283.99 00:09:31.637 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0x0 length 0xa0000 00:09:31.637 Nvme1n1 : 5.05 2369.23 9.25 0.00 0.00 53760.52 9413.35 54573.61 00:09:31.637 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0xa0000 length 0xa0000 00:09:31.637 Nvme1n1 : 5.06 2390.49 9.34 0.00 0.00 53275.53 4289.63 45756.04 00:09:31.637 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0x0 length 0x80000 00:09:31.637 Nvme2n1 : 5.06 2374.65 9.28 0.00 0.00 53577.98 4110.89 49807.36 00:09:31.637 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0x80000 length 0x80000 00:09:31.637 Nvme2n1 : 
5.06 2389.29 9.33 0.00 0.00 53252.10 5928.03 45994.36 00:09:31.637 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0x0 length 0x80000 00:09:31.637 Nvme2n2 : 5.06 2373.98 9.27 0.00 0.00 53537.26 4498.15 50283.99 00:09:31.637 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0x80000 length 0x80000 00:09:31.637 Nvme2n2 : 5.07 2388.00 9.33 0.00 0.00 53229.42 7536.64 43849.54 00:09:31.637 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0x0 length 0x80000 00:09:31.637 Nvme2n3 : 5.06 2372.81 9.27 0.00 0.00 53510.16 5630.14 50998.92 00:09:31.637 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0x80000 length 0x80000 00:09:31.637 Nvme2n3 : 5.07 2386.81 9.32 0.00 0.00 53204.11 9055.88 43849.54 00:09:31.637 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0x0 length 0x20000 00:09:31.637 Nvme3n1 : 5.07 2371.55 9.26 0.00 0.00 53478.49 7149.38 48615.80 00:09:31.637 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:31.637 Verification LBA range: start 0x20000 length 0x20000 00:09:31.637 Nvme3n1 : 5.07 2386.20 9.32 0.00 0.00 53167.50 9592.09 43611.23 00:09:31.637 =================================================================================================================== 00:09:31.637 Total : 33317.09 130.14 0.00 0.00 53462.03 4110.89 57433.37 00:09:33.537 00:09:33.537 real 0m7.831s 00:09:33.537 user 0m14.366s 00:09:33.537 sys 0m0.262s 00:09:33.537 09:47:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.537 09:47:26 -- common/autotest_common.sh@10 -- # set +x 00:09:33.537 ************************************ 00:09:33.537 END TEST bdev_verify 00:09:33.537 ************************************ 00:09:33.537 09:47:26 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:33.537 09:47:26 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:09:33.537 09:47:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:33.537 09:47:26 -- common/autotest_common.sh@10 -- # set +x 00:09:33.537 ************************************ 00:09:33.537 START TEST bdev_verify_big_io 00:09:33.537 ************************************ 00:09:33.537 09:47:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:33.537 [2024-06-10 09:47:26.996681] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
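The bdev_verify totals above come from SPDK's bdevperf example driving every bdev at once with a verifying 4 KiB workload; the invocation, reproduced from the trace with repo-relative paths:

  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

-q 128 keeps 128 I/Os in flight per job, -o 4096 sets the I/O size, -w verify makes bdevperf check read data against what it wrote, and -t 5 runs for five seconds; the paired Core Mask 0x1/0x2 rows in the table reflect each bdev being exercised from both cores of the 0x3 mask. The bdev_verify_big_io run starting here repeats the same pattern with -o 65536, so every I/O is 64 KiB instead of 4 KiB.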
00:09:33.537 [2024-06-10 09:47:26.996859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63660 ] 00:09:33.537 [2024-06-10 09:47:27.164974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:33.795 [2024-06-10 09:47:27.332266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.795 [2024-06-10 09:47:27.332274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.361 Running I/O for 5 seconds... 00:09:40.927 00:09:40.927 Latency(us) 00:09:40.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.927 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:40.927 Verification LBA range: start 0x0 length 0x5e80 00:09:40.927 Nvme0n1p1 : 5.45 207.19 12.95 0.00 0.00 605158.92 25856.93 766413.73 00:09:40.927 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:40.927 Verification LBA range: start 0x5e80 length 0x5e80 00:09:40.927 Nvme0n1p1 : 5.43 238.78 14.92 0.00 0.00 524698.94 17635.14 701592.67 00:09:40.927 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:40.927 Verification LBA range: start 0x0 length 0x5e7f 00:09:40.927 Nvme0n1p2 : 5.45 207.06 12.94 0.00 0.00 596805.83 27644.28 705405.67 00:09:40.927 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:40.927 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:40.927 Nvme0n1p2 : 5.43 238.70 14.92 0.00 0.00 518095.36 17754.30 652023.62 00:09:40.927 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:40.927 Verification LBA range: start 0x0 length 0xa000 00:09:40.927 Nvme1n1 : 5.47 214.75 13.42 0.00 0.00 572988.07 14715.81 652023.62 00:09:40.927 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:40.927 Verification LBA range: start 0xa000 length 0xa000 00:09:40.927 Nvme1n1 : 5.44 247.80 15.49 0.00 0.00 498837.86 7447.27 602454.57 00:09:40.927 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:40.927 Verification LBA range: start 0x0 length 0x8000 00:09:40.927 Nvme2n1 : 5.47 214.66 13.42 0.00 0.00 565077.45 15371.17 625332.60 00:09:40.927 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:40.927 Verification LBA range: start 0x8000 length 0x8000 00:09:40.927 Nvme2n1 : 5.45 247.70 15.48 0.00 0.00 492678.37 8340.95 594828.57 00:09:40.927 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:40.927 Verification LBA range: start 0x0 length 0x8000 00:09:40.927 Nvme2n2 : 5.48 214.58 13.41 0.00 0.00 556903.23 15966.95 648210.62 00:09:40.927 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:40.927 Verification LBA range: start 0x8000 length 0x8000 00:09:40.928 Nvme2n2 : 5.45 247.59 15.47 0.00 0.00 486378.32 9175.04 598641.57 00:09:40.928 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:40.928 Verification LBA range: start 0x0 length 0x8000 00:09:40.928 Nvme2n3 : 5.48 221.50 13.84 0.00 0.00 532997.73 2383.13 915120.87 00:09:40.928 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:40.928 Verification LBA range: start 0x8000 length 0x8000 00:09:40.928 Nvme2n3 : 5.45 247.42 15.46 0.00 0.00 480097.09 10307.03 602454.57 
00:09:40.928 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:40.928 Verification LBA range: start 0x0 length 0x2000 00:09:40.928 Nvme3n1 : 5.49 228.44 14.28 0.00 0.00 509993.47 2144.81 934185.89 00:09:40.928 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:40.928 Verification LBA range: start 0x2000 length 0x2000 00:09:40.928 Nvme3n1 : 5.46 254.90 15.93 0.00 0.00 460622.79 927.19 606267.58 00:09:40.928 =================================================================================================================== 00:09:40.928 Total : 3231.07 201.94 0.00 0.00 525735.85 927.19 934185.89 00:09:41.507 00:09:41.507 real 0m8.357s 00:09:41.507 user 0m15.444s 00:09:41.507 sys 0m0.289s 00:09:41.507 ************************************ 00:09:41.507 END TEST bdev_verify_big_io 00:09:41.507 ************************************ 00:09:41.507 09:47:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.507 09:47:35 -- common/autotest_common.sh@10 -- # set +x 00:09:41.767 09:47:35 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:41.767 09:47:35 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:41.767 09:47:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:41.767 09:47:35 -- common/autotest_common.sh@10 -- # set +x 00:09:41.767 ************************************ 00:09:41.767 START TEST bdev_write_zeroes 00:09:41.767 ************************************ 00:09:41.767 09:47:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:41.767 [2024-06-10 09:47:35.401128] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:41.767 [2024-06-10 09:47:35.401303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63770 ] 00:09:42.026 [2024-06-10 09:47:35.569579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.026 [2024-06-10 09:47:35.737901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.594 Running I/O for 1 seconds... 
00:09:43.968 00:09:43.968 Latency(us) 00:09:43.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.968 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:43.968 Nvme0n1p1 : 1.02 7047.09 27.53 0.00 0.00 18075.62 13405.09 28478.37 00:09:43.968 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:43.968 Nvme0n1p2 : 1.02 7035.33 27.48 0.00 0.00 18071.47 13643.40 29312.47 00:09:43.968 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:43.968 Nvme1n1 : 1.02 7065.87 27.60 0.00 0.00 17978.85 11915.64 25737.77 00:09:43.968 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:43.968 Nvme2n1 : 1.03 7054.88 27.56 0.00 0.00 17916.41 12034.79 23950.43 00:09:43.968 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:43.968 Nvme2n2 : 1.03 7044.41 27.52 0.00 0.00 17891.74 12153.95 23592.96 00:09:43.968 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:43.968 Nvme2n3 : 1.03 7033.93 27.48 0.00 0.00 17871.64 9651.67 23831.27 00:09:43.968 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:43.968 Nvme3n1 : 1.03 7081.97 27.66 0.00 0.00 17777.10 7417.48 23592.96 00:09:43.968 =================================================================================================================== 00:09:43.968 Total : 49363.49 192.83 0.00 0.00 17939.86 7417.48 29312.47 00:09:44.905 00:09:44.905 real 0m3.206s 00:09:44.905 user 0m2.876s 00:09:44.905 sys 0m0.209s 00:09:44.905 09:47:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.905 ************************************ 00:09:44.905 END TEST bdev_write_zeroes 00:09:44.905 ************************************ 00:09:44.905 09:47:38 -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 09:47:38 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:44.905 09:47:38 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:44.905 09:47:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:44.905 09:47:38 -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 ************************************ 00:09:44.905 START TEST bdev_json_nonenclosed 00:09:44.905 ************************************ 00:09:44.905 09:47:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:44.905 [2024-06-10 09:47:38.663285] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:44.905 [2024-06-10 09:47:38.663427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63823 ] 00:09:45.164 [2024-06-10 09:47:38.822659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.423 [2024-06-10 09:47:38.994330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.423 [2024-06-10 09:47:38.994519] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:09:45.423 [2024-06-10 09:47:38.994562] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:45.683 00:09:45.683 real 0m0.811s 00:09:45.683 user 0m0.575s 00:09:45.683 sys 0m0.131s 00:09:45.683 09:47:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.683 09:47:39 -- common/autotest_common.sh@10 -- # set +x 00:09:45.683 ************************************ 00:09:45.683 END TEST bdev_json_nonenclosed 00:09:45.683 ************************************ 00:09:45.683 09:47:39 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:45.683 09:47:39 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:45.683 09:47:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:45.683 09:47:39 -- common/autotest_common.sh@10 -- # set +x 00:09:45.683 ************************************ 00:09:45.683 START TEST bdev_json_nonarray 00:09:45.683 ************************************ 00:09:45.683 09:47:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:45.942 [2024-06-10 09:47:39.512758] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:45.942 [2024-06-10 09:47:39.512901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63854 ] 00:09:45.942 [2024-06-10 09:47:39.672167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.201 [2024-06-10 09:47:39.839115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.201 [2024-06-10 09:47:39.839363] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:46.201 [2024-06-10 09:47:39.839394] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:46.460 00:09:46.460 real 0m0.767s 00:09:46.460 user 0m0.550s 00:09:46.460 sys 0m0.111s 00:09:46.460 09:47:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.460 ************************************ 00:09:46.460 END TEST bdev_json_nonarray 00:09:46.460 ************************************ 00:09:46.460 09:47:40 -- common/autotest_common.sh@10 -- # set +x 00:09:46.719 09:47:40 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:09:46.719 09:47:40 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:09:46.719 09:47:40 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:46.719 09:47:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:46.719 09:47:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:46.719 09:47:40 -- common/autotest_common.sh@10 -- # set +x 00:09:46.719 ************************************ 00:09:46.719 START TEST bdev_gpt_uuid 00:09:46.719 ************************************ 00:09:46.719 09:47:40 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:09:46.719 09:47:40 -- bdev/blockdev.sh@612 -- # local bdev 00:09:46.719 09:47:40 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:09:46.719 09:47:40 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=63885 00:09:46.719 09:47:40 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:46.719 09:47:40 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:46.719 09:47:40 -- bdev/blockdev.sh@47 -- # waitforlisten 63885 00:09:46.719 09:47:40 -- common/autotest_common.sh@819 -- # '[' -z 63885 ']' 00:09:46.719 09:47:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.719 09:47:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:46.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.719 09:47:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.719 09:47:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:46.719 09:47:40 -- common/autotest_common.sh@10 -- # set +x 00:09:46.719 [2024-06-10 09:47:40.372654] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:46.719 [2024-06-10 09:47:40.372817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63885 ] 00:09:46.978 [2024-06-10 09:47:40.533845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.978 [2024-06-10 09:47:40.700762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:46.978 [2024-06-10 09:47:40.700993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.356 09:47:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:48.356 09:47:42 -- common/autotest_common.sh@852 -- # return 0 00:09:48.356 09:47:42 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:48.356 09:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:48.356 09:47:42 -- common/autotest_common.sh@10 -- # set +x 00:09:48.615 Some configs were skipped because the RPC state that can call them passed over. 
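Both bdev_json negative tests above hand bdevperf a deliberately malformed --json config and pass only if spdk_app_start rejects it: nonenclosed.json is not enclosed in a top-level {} object, and nonarray.json carries a 'subsystems' key whose value is not an array, matching the two json_config.c errors quoted in the trace. For contrast, a minimal sketch of the shape the loader accepts (illustrative only, not a file from this run):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": []
      }
    ]
  }

The gpt_uuid test now underway loads the real bdev.json through rpc_cmd load_config before querying the GPT bdevs.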
00:09:48.615 09:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:48.615 09:47:42 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:48.615 09:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:48.615 09:47:42 -- common/autotest_common.sh@10 -- # set +x 00:09:48.615 09:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:48.615 09:47:42 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:48.615 09:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:48.615 09:47:42 -- common/autotest_common.sh@10 -- # set +x 00:09:48.615 09:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:48.615 09:47:42 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:48.615 { 00:09:48.615 "name": "Nvme0n1p1", 00:09:48.615 "aliases": [ 00:09:48.615 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:48.615 ], 00:09:48.615 "product_name": "GPT Disk", 00:09:48.615 "block_size": 4096, 00:09:48.616 "num_blocks": 774144, 00:09:48.616 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:48.616 "md_size": 64, 00:09:48.616 "md_interleave": false, 00:09:48.616 "dif_type": 0, 00:09:48.616 "assigned_rate_limits": { 00:09:48.616 "rw_ios_per_sec": 0, 00:09:48.616 "rw_mbytes_per_sec": 0, 00:09:48.616 "r_mbytes_per_sec": 0, 00:09:48.616 "w_mbytes_per_sec": 0 00:09:48.616 }, 00:09:48.616 "claimed": false, 00:09:48.616 "zoned": false, 00:09:48.616 "supported_io_types": { 00:09:48.616 "read": true, 00:09:48.616 "write": true, 00:09:48.616 "unmap": true, 00:09:48.616 "write_zeroes": true, 00:09:48.616 "flush": true, 00:09:48.616 "reset": true, 00:09:48.616 "compare": true, 00:09:48.616 "compare_and_write": false, 00:09:48.616 "abort": true, 00:09:48.616 "nvme_admin": false, 00:09:48.616 "nvme_io": false 00:09:48.616 }, 00:09:48.616 "driver_specific": { 00:09:48.616 "gpt": { 00:09:48.616 "base_bdev": "Nvme0n1", 00:09:48.616 "offset_blocks": 256, 00:09:48.616 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:48.616 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:48.616 "partition_name": "SPDK_TEST_first" 00:09:48.616 } 00:09:48.616 } 00:09:48.616 } 00:09:48.616 ]' 00:09:48.616 09:47:42 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:48.875 09:47:42 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:48.875 09:47:42 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:48.875 09:47:42 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:48.875 09:47:42 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:48.875 09:47:42 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:48.875 09:47:42 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:48.875 09:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:48.875 09:47:42 -- common/autotest_common.sh@10 -- # set +x 00:09:48.875 09:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:48.875 09:47:42 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:48.875 { 00:09:48.875 "name": "Nvme0n1p2", 00:09:48.875 "aliases": [ 00:09:48.875 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:48.875 ], 00:09:48.875 "product_name": "GPT Disk", 00:09:48.875 "block_size": 4096, 00:09:48.875 "num_blocks": 774143, 00:09:48.875 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:48.875 "md_size": 64, 00:09:48.875 "md_interleave": false, 00:09:48.875 "dif_type": 0, 00:09:48.875 "assigned_rate_limits": { 00:09:48.875 "rw_ios_per_sec": 0, 00:09:48.875 "rw_mbytes_per_sec": 0, 00:09:48.875 "r_mbytes_per_sec": 0, 00:09:48.875 "w_mbytes_per_sec": 0 00:09:48.875 }, 00:09:48.875 "claimed": false, 00:09:48.875 "zoned": false, 00:09:48.875 "supported_io_types": { 00:09:48.875 "read": true, 00:09:48.875 "write": true, 00:09:48.875 "unmap": true, 00:09:48.875 "write_zeroes": true, 00:09:48.875 "flush": true, 00:09:48.875 "reset": true, 00:09:48.875 "compare": true, 00:09:48.875 "compare_and_write": false, 00:09:48.875 "abort": true, 00:09:48.875 "nvme_admin": false, 00:09:48.875 "nvme_io": false 00:09:48.875 }, 00:09:48.875 "driver_specific": { 00:09:48.875 "gpt": { 00:09:48.875 "base_bdev": "Nvme0n1", 00:09:48.875 "offset_blocks": 774400, 00:09:48.875 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:48.875 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:48.875 "partition_name": "SPDK_TEST_second" 00:09:48.875 } 00:09:48.875 } 00:09:48.875 } 00:09:48.875 ]' 00:09:48.875 09:47:42 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:48.875 09:47:42 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:48.875 09:47:42 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:48.875 09:47:42 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:48.875 09:47:42 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:49.134 09:47:42 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:49.134 09:47:42 -- bdev/blockdev.sh@629 -- # killprocess 63885 00:09:49.134 09:47:42 -- common/autotest_common.sh@926 -- # '[' -z 63885 ']' 00:09:49.134 09:47:42 -- common/autotest_common.sh@930 -- # kill -0 63885 00:09:49.134 09:47:42 -- common/autotest_common.sh@931 -- # uname 00:09:49.134 09:47:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:49.134 09:47:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63885 00:09:49.134 09:47:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:49.134 killing process with pid 63885 00:09:49.134 09:47:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:49.134 09:47:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63885' 00:09:49.134 09:47:42 -- common/autotest_common.sh@945 -- # kill 63885 00:09:49.134 09:47:42 -- common/autotest_common.sh@950 -- # wait 63885 00:09:51.040 00:09:51.040 real 0m4.324s 00:09:51.040 user 0m4.842s 00:09:51.040 sys 0m0.452s 00:09:51.040 09:47:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.040 ************************************ 00:09:51.040 END TEST bdev_gpt_uuid 00:09:51.040 ************************************ 00:09:51.040 09:47:44 -- common/autotest_common.sh@10 -- # set +x 00:09:51.040 09:47:44 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:51.040 09:47:44 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:51.040 09:47:44 -- bdev/blockdev.sh@809 -- # cleanup 00:09:51.040 09:47:44 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:51.040 09:47:44 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:51.040 09:47:44 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:09:51.040 09:47:44 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:51.040 09:47:44 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:51.040 09:47:44 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:51.299 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.557 Waiting for block devices as requested 00:09:51.557 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.557 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.816 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.816 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.096 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:57.096 09:47:50 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:57.096 09:47:50 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:57.096 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:57.096 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:57.096 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:57.096 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:57.096 09:47:50 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:57.096 00:09:57.096 real 1m4.828s 00:09:57.096 user 1m23.935s 00:09:57.096 sys 0m9.411s 00:09:57.096 ************************************ 00:09:57.096 END TEST blockdev_nvme_gpt 00:09:57.096 ************************************ 00:09:57.096 09:47:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.096 09:47:50 -- common/autotest_common.sh@10 -- # set +x 00:09:57.096 09:47:50 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:57.096 09:47:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:57.096 09:47:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:57.096 09:47:50 -- common/autotest_common.sh@10 -- # set +x 00:09:57.096 ************************************ 00:09:57.096 START TEST nvme 00:09:57.096 ************************************ 00:09:57.096 09:47:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:57.354 * Looking for test storage... 
00:09:57.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:57.354 09:47:50 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:57.976 lsblk: /dev/nvme0c0n1: not a block device 00:09:58.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:58.494 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.494 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.494 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.495 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.495 09:47:52 -- nvme/nvme.sh@79 -- # uname 00:09:58.495 09:47:52 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:58.495 09:47:52 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:58.495 09:47:52 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:58.495 09:47:52 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:58.495 09:47:52 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:09:58.495 09:47:52 -- common/autotest_common.sh@1045 -- # echo 0 00:09:58.495 09:47:52 -- common/autotest_common.sh@1047 -- # stubpid=64583 00:09:58.495 09:47:52 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:58.495 09:47:52 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:09:58.495 Waiting for stub to ready for secondary processes... 00:09:58.495 09:47:52 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:58.495 09:47:52 -- common/autotest_common.sh@1051 -- # [[ -e /proc/64583 ]] 00:09:58.495 09:47:52 -- common/autotest_common.sh@1052 -- # sleep 1s 00:09:58.754 [2024-06-10 09:47:52.292324] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:58.754 [2024-06-10 09:47:52.292473] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.322 [2024-06-10 09:47:53.082547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:59.581 09:47:53 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:59.581 09:47:53 -- common/autotest_common.sh@1051 -- # [[ -e /proc/64583 ]] 00:09:59.581 09:47:53 -- common/autotest_common.sh@1052 -- # sleep 1s 00:09:59.581 [2024-06-10 09:47:53.291951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.581 [2024-06-10 09:47:53.292049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.581 [2024-06-10 09:47:53.292052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.581 [2024-06-10 09:47:53.315986] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:59.581 [2024-06-10 09:47:53.329177] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:59.581 [2024-06-10 09:47:53.329429] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:59.581 [2024-06-10 09:47:53.340894] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:59.581 [2024-06-10 09:47:53.341083] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:59.581 [2024-06-10 09:47:53.341260] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:59.840 [2024-06-10 09:47:53.350483] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:59.840 [2024-06-10 09:47:53.350683] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:59.840 [2024-06-10 09:47:53.350841] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:59.840 [2024-06-10 09:47:53.360414] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:59.840 [2024-06-10 09:47:53.360633] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:59.840 [2024-06-10 09:47:53.360804] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:59.840 [2024-06-10 09:47:53.360961] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:59.840 [2024-06-10 09:47:53.361246] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:00.777 09:47:54 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:00.777 done. 00:10:00.777 09:47:54 -- common/autotest_common.sh@1054 -- # echo done. 
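The "Waiting for stub to ready" exchange above is the harness's primary/secondary handshake: the stub starts as the DPDK primary process and creates /var/run/spdk_stub0 once initialized, while the test loops until that file exists, bailing out if the stub dies first. A condensed sketch of the same pattern, using the paths and flags shown in the log:

  /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
  stubpid=$!
  echo "Waiting for stub to ready for secondary processes..."
  while [ ! -e /var/run/spdk_stub0 ]; do
    # If the stub process is gone, the readiness file will never appear.
    [[ -e /proc/$stubpid ]] || { echo "stub exited before becoming ready" >&2; exit 1; }
    sleep 1s
  done
  echo done.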
00:10:00.777 09:47:54 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:00.777 09:47:54 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:00.777 09:47:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:00.778 09:47:54 -- common/autotest_common.sh@10 -- # set +x 00:10:00.778 ************************************ 00:10:00.778 START TEST nvme_reset 00:10:00.778 ************************************ 00:10:00.778 09:47:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:00.778 Initializing NVMe Controllers 00:10:00.778 Skipping QEMU NVMe SSD at 0000:00:06.0 00:10:00.778 Skipping QEMU NVMe SSD at 0000:00:07.0 00:10:00.778 Skipping QEMU NVMe SSD at 0000:00:09.0 00:10:00.778 Skipping QEMU NVMe SSD at 0000:00:08.0 00:10:00.778 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:00.778 00:10:00.778 real 0m0.281s 00:10:00.778 user 0m0.106s 00:10:00.778 sys 0m0.131s 00:10:00.778 09:47:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.036 ************************************ 00:10:01.036 END TEST nvme_reset 00:10:01.036 ************************************ 00:10:01.036 09:47:54 -- common/autotest_common.sh@10 -- # set +x 00:10:01.036 09:47:54 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:01.036 09:47:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:01.036 09:47:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:01.036 09:47:54 -- common/autotest_common.sh@10 -- # set +x 00:10:01.036 ************************************ 00:10:01.036 START TEST nvme_identify 00:10:01.036 ************************************ 00:10:01.036 09:47:54 -- common/autotest_common.sh@1104 -- # nvme_identify 00:10:01.036 09:47:54 -- nvme/nvme.sh@12 -- # bdfs=() 00:10:01.036 09:47:54 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:01.036 09:47:54 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:01.036 09:47:54 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:01.036 09:47:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:01.036 09:47:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:01.036 09:47:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:01.036 09:47:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:01.036 09:47:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:01.036 09:47:54 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:01.036 09:47:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:01.036 09:47:54 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:01.298 [2024-06-10 09:47:54.886346] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 64621 terminated unexpected 00:10:01.298 ===================================================== 00:10:01.298 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:01.298 ===================================================== 00:10:01.298 Controller Capabilities/Features 00:10:01.298 ================================ 00:10:01.298 Vendor ID: 1b36 00:10:01.298 Subsystem Vendor ID: 1af4 00:10:01.298 Serial Number: 12340 00:10:01.298 Model Number: QEMU NVMe Ctrl 00:10:01.298 Firmware Version: 8.0.0 00:10:01.298 Recommended Arb 
Burst: 6 00:10:01.298 IEEE OUI Identifier: 00 54 52 00:10:01.298 Multi-path I/O 00:10:01.298 May have multiple subsystem ports: No 00:10:01.298 May have multiple controllers: No 00:10:01.298 Associated with SR-IOV VF: No 00:10:01.298 Max Data Transfer Size: 524288 00:10:01.298 Max Number of Namespaces: 256 00:10:01.298 Max Number of I/O Queues: 64 00:10:01.298 NVMe Specification Version (VS): 1.4 00:10:01.298 NVMe Specification Version (Identify): 1.4 00:10:01.298 Maximum Queue Entries: 2048 00:10:01.298 Contiguous Queues Required: Yes 00:10:01.298 Arbitration Mechanisms Supported 00:10:01.298 Weighted Round Robin: Not Supported 00:10:01.298 Vendor Specific: Not Supported 00:10:01.298 Reset Timeout: 7500 ms 00:10:01.298 Doorbell Stride: 4 bytes 00:10:01.298 NVM Subsystem Reset: Not Supported 00:10:01.298 Command Sets Supported 00:10:01.298 NVM Command Set: Supported 00:10:01.298 Boot Partition: Not Supported 00:10:01.298 Memory Page Size Minimum: 4096 bytes 00:10:01.298 Memory Page Size Maximum: 65536 bytes 00:10:01.298 Persistent Memory Region: Not Supported 00:10:01.298 Optional Asynchronous Events Supported 00:10:01.298 Namespace Attribute Notices: Supported 00:10:01.298 Firmware Activation Notices: Not Supported 00:10:01.298 ANA Change Notices: Not Supported 00:10:01.298 PLE Aggregate Log Change Notices: Not Supported 00:10:01.298 LBA Status Info Alert Notices: Not Supported 00:10:01.298 EGE Aggregate Log Change Notices: Not Supported 00:10:01.298 Normal NVM Subsystem Shutdown event: Not Supported 00:10:01.298 Zone Descriptor Change Notices: Not Supported 00:10:01.298 Discovery Log Change Notices: Not Supported 00:10:01.298 Controller Attributes 00:10:01.298 128-bit Host Identifier: Not Supported 00:10:01.298 Non-Operational Permissive Mode: Not Supported 00:10:01.298 NVM Sets: Not Supported 00:10:01.298 Read Recovery Levels: Not Supported 00:10:01.298 Endurance Groups: Not Supported 00:10:01.298 Predictable Latency Mode: Not Supported 00:10:01.298 Traffic Based Keep ALive: Not Supported 00:10:01.298 Namespace Granularity: Not Supported 00:10:01.298 SQ Associations: Not Supported 00:10:01.298 UUID List: Not Supported 00:10:01.298 Multi-Domain Subsystem: Not Supported 00:10:01.298 Fixed Capacity Management: Not Supported 00:10:01.298 Variable Capacity Management: Not Supported 00:10:01.298 Delete Endurance Group: Not Supported 00:10:01.298 Delete NVM Set: Not Supported 00:10:01.298 Extended LBA Formats Supported: Supported 00:10:01.298 Flexible Data Placement Supported: Not Supported 00:10:01.298 00:10:01.298 Controller Memory Buffer Support 00:10:01.298 ================================ 00:10:01.298 Supported: No 00:10:01.298 00:10:01.298 Persistent Memory Region Support 00:10:01.298 ================================ 00:10:01.298 Supported: No 00:10:01.298 00:10:01.298 Admin Command Set Attributes 00:10:01.298 ============================ 00:10:01.298 Security Send/Receive: Not Supported 00:10:01.298 Format NVM: Supported 00:10:01.298 Firmware Activate/Download: Not Supported 00:10:01.298 Namespace Management: Supported 00:10:01.298 Device Self-Test: Not Supported 00:10:01.298 Directives: Supported 00:10:01.298 NVMe-MI: Not Supported 00:10:01.298 Virtualization Management: Not Supported 00:10:01.298 Doorbell Buffer Config: Supported 00:10:01.298 Get LBA Status Capability: Not Supported 00:10:01.298 Command & Feature Lockdown Capability: Not Supported 00:10:01.298 Abort Command Limit: 4 00:10:01.298 Async Event Request Limit: 4 00:10:01.298 Number of Firmware Slots: N/A 00:10:01.298 
Firmware Slot 1 Read-Only: N/A 00:10:01.298 Firmware Activation Without Reset: N/A 00:10:01.298 Multiple Update Detection Support: N/A 00:10:01.298 Firmware Update Granularity: No Information Provided 00:10:01.298 Per-Namespace SMART Log: Yes 00:10:01.298 Asymmetric Namespace Access Log Page: Not Supported 00:10:01.298 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:01.298 Command Effects Log Page: Supported 00:10:01.298 Get Log Page Extended Data: Supported 00:10:01.298 Telemetry Log Pages: Not Supported 00:10:01.298 Persistent Event Log Pages: Not Supported 00:10:01.298 Supported Log Pages Log Page: May Support 00:10:01.298 Commands Supported & Effects Log Page: Not Supported 00:10:01.298 Feature Identifiers & Effects Log Page:May Support 00:10:01.298 NVMe-MI Commands & Effects Log Page: May Support 00:10:01.298 Data Area 4 for Telemetry Log: Not Supported 00:10:01.298 Error Log Page Entries Supported: 1 00:10:01.298 Keep Alive: Not Supported 00:10:01.298 00:10:01.298 NVM Command Set Attributes 00:10:01.298 ========================== 00:10:01.298 Submission Queue Entry Size 00:10:01.298 Max: 64 00:10:01.298 Min: 64 00:10:01.298 Completion Queue Entry Size 00:10:01.298 Max: 16 00:10:01.298 Min: 16 00:10:01.298 Number of Namespaces: 256 00:10:01.298 Compare Command: Supported 00:10:01.298 Write Uncorrectable Command: Not Supported 00:10:01.298 Dataset Management Command: Supported 00:10:01.298 Write Zeroes Command: Supported 00:10:01.298 Set Features Save Field: Supported 00:10:01.298 Reservations: Not Supported 00:10:01.298 Timestamp: Supported 00:10:01.298 Copy: Supported 00:10:01.298 Volatile Write Cache: Present 00:10:01.298 Atomic Write Unit (Normal): 1 00:10:01.298 Atomic Write Unit (PFail): 1 00:10:01.298 Atomic Compare & Write Unit: 1 00:10:01.298 Fused Compare & Write: Not Supported 00:10:01.299 Scatter-Gather List 00:10:01.299 SGL Command Set: Supported 00:10:01.299 SGL Keyed: Not Supported 00:10:01.299 SGL Bit Bucket Descriptor: Not Supported 00:10:01.299 SGL Metadata Pointer: Not Supported 00:10:01.299 Oversized SGL: Not Supported 00:10:01.299 SGL Metadata Address: Not Supported 00:10:01.299 SGL Offset: Not Supported 00:10:01.299 Transport SGL Data Block: Not Supported 00:10:01.299 Replay Protected Memory Block: Not Supported 00:10:01.299 00:10:01.299 Firmware Slot Information 00:10:01.299 ========================= 00:10:01.299 Active slot: 1 00:10:01.299 Slot 1 Firmware Revision: 1.0 00:10:01.299 00:10:01.299 00:10:01.299 Commands Supported and Effects 00:10:01.299 ============================== 00:10:01.299 Admin Commands 00:10:01.299 -------------- 00:10:01.299 Delete I/O Submission Queue (00h): Supported 00:10:01.299 Create I/O Submission Queue (01h): Supported 00:10:01.299 Get Log Page (02h): Supported 00:10:01.299 Delete I/O Completion Queue (04h): Supported 00:10:01.299 Create I/O Completion Queue (05h): Supported 00:10:01.299 Identify (06h): Supported 00:10:01.299 Abort (08h): Supported 00:10:01.299 Set Features (09h): Supported 00:10:01.299 Get Features (0Ah): Supported 00:10:01.299 Asynchronous Event Request (0Ch): Supported 00:10:01.299 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:01.299 Directive Send (19h): Supported 00:10:01.299 Directive Receive (1Ah): Supported 00:10:01.299 Virtualization Management (1Ch): Supported 00:10:01.299 Doorbell Buffer Config (7Ch): Supported 00:10:01.299 Format NVM (80h): Supported LBA-Change 00:10:01.299 I/O Commands 00:10:01.299 ------------ 00:10:01.299 Flush (00h): Supported LBA-Change 00:10:01.299 Write (01h): 
Supported LBA-Change 00:10:01.299 Read (02h): Supported 00:10:01.299 Compare (05h): Supported 00:10:01.299 Write Zeroes (08h): Supported LBA-Change 00:10:01.299 Dataset Management (09h): Supported LBA-Change 00:10:01.299 Unknown (0Ch): Supported 00:10:01.299 Unknown (12h): Supported 00:10:01.299 Copy (19h): Supported LBA-Change 00:10:01.299 Unknown (1Dh): Supported LBA-Change 00:10:01.299 00:10:01.299 Error Log 00:10:01.299 ========= 00:10:01.299 00:10:01.299 Arbitration 00:10:01.299 =========== 00:10:01.299 Arbitration Burst: no limit 00:10:01.299 00:10:01.299 Power Management 00:10:01.299 ================ 00:10:01.299 Number of Power States: 1 00:10:01.299 Current Power State: Power State #0 00:10:01.299 Power State #0: 00:10:01.299 Max Power: 25.00 W 00:10:01.299 Non-Operational State: Operational 00:10:01.299 Entry Latency: 16 microseconds 00:10:01.299 Exit Latency: 4 microseconds 00:10:01.299 Relative Read Throughput: 0 00:10:01.299 Relative Read Latency: 0 00:10:01.299 Relative Write Throughput: 0 00:10:01.299 Relative Write Latency: 0 00:10:01.299 Idle Power[2024-06-10 09:47:54.887836] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 64621 terminated unexpected 00:10:01.299 : Not Reported 00:10:01.299 Active Power: Not Reported 00:10:01.299 Non-Operational Permissive Mode: Not Supported 00:10:01.299 00:10:01.299 Health Information 00:10:01.299 ================== 00:10:01.299 Critical Warnings: 00:10:01.299 Available Spare Space: OK 00:10:01.299 Temperature: OK 00:10:01.299 Device Reliability: OK 00:10:01.299 Read Only: No 00:10:01.299 Volatile Memory Backup: OK 00:10:01.299 Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.299 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:01.299 Available Spare: 0% 00:10:01.299 Available Spare Threshold: 0% 00:10:01.299 Life Percentage Used: 0% 00:10:01.299 Data Units Read: 1757 00:10:01.299 Data Units Written: 804 00:10:01.299 Host Read Commands: 86811 00:10:01.299 Host Write Commands: 42995 00:10:01.299 Controller Busy Time: 0 minutes 00:10:01.299 Power Cycles: 0 00:10:01.299 Power On Hours: 0 hours 00:10:01.299 Unsafe Shutdowns: 0 00:10:01.299 Unrecoverable Media Errors: 0 00:10:01.299 Lifetime Error Log Entries: 0 00:10:01.299 Warning Temperature Time: 0 minutes 00:10:01.299 Critical Temperature Time: 0 minutes 00:10:01.299 00:10:01.299 Number of Queues 00:10:01.299 ================ 00:10:01.299 Number of I/O Submission Queues: 64 00:10:01.299 Number of I/O Completion Queues: 64 00:10:01.299 00:10:01.299 ZNS Specific Controller Data 00:10:01.299 ============================ 00:10:01.299 Zone Append Size Limit: 0 00:10:01.299 00:10:01.299 00:10:01.299 Active Namespaces 00:10:01.299 ================= 00:10:01.299 Namespace ID:1 00:10:01.299 Error Recovery Timeout: Unlimited 00:10:01.299 Command Set Identifier: NVM (00h) 00:10:01.299 Deallocate: Supported 00:10:01.299 Deallocated/Unwritten Error: Supported 00:10:01.299 Deallocated Read Value: All 0x00 00:10:01.299 Deallocate in Write Zeroes: Not Supported 00:10:01.299 Deallocated Guard Field: 0xFFFF 00:10:01.299 Flush: Supported 00:10:01.299 Reservation: Not Supported 00:10:01.299 Metadata Transferred as: Separate Metadata Buffer 00:10:01.299 Namespace Sharing Capabilities: Private 00:10:01.299 Size (in LBAs): 1548666 (5GiB) 00:10:01.299 Capacity (in LBAs): 1548666 (5GiB) 00:10:01.299 Utilization (in LBAs): 1548666 (5GiB) 00:10:01.299 Thin Provisioning: Not Supported 00:10:01.299 Per-NS Atomic Units: No 00:10:01.299 Maximum Single Source Range Length: 
128 00:10:01.299 Maximum Copy Length: 128 00:10:01.299 Maximum Source Range Count: 128 00:10:01.299 NGUID/EUI64 Never Reused: No 00:10:01.299 Namespace Write Protected: No 00:10:01.299 Number of LBA Formats: 8 00:10:01.299 Current LBA Format: LBA Format #07 00:10:01.299 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.299 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.299 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.299 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.299 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.299 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.299 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.299 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.299 00:10:01.299 ===================================================== 00:10:01.299 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:01.299 ===================================================== 00:10:01.299 Controller Capabilities/Features 00:10:01.299 ================================ 00:10:01.299 Vendor ID: 1b36 00:10:01.299 Subsystem Vendor ID: 1af4 00:10:01.299 Serial Number: 12341 00:10:01.299 Model Number: QEMU NVMe Ctrl 00:10:01.299 Firmware Version: 8.0.0 00:10:01.299 Recommended Arb Burst: 6 00:10:01.299 IEEE OUI Identifier: 00 54 52 00:10:01.299 Multi-path I/O 00:10:01.299 May have multiple subsystem ports: No 00:10:01.299 May have multiple controllers: No 00:10:01.299 Associated with SR-IOV VF: No 00:10:01.299 Max Data Transfer Size: 524288 00:10:01.299 Max Number of Namespaces: 256 00:10:01.299 Max Number of I/O Queues: 64 00:10:01.299 NVMe Specification Version (VS): 1.4 00:10:01.299 NVMe Specification Version (Identify): 1.4 00:10:01.299 Maximum Queue Entries: 2048 00:10:01.299 Contiguous Queues Required: Yes 00:10:01.299 Arbitration Mechanisms Supported 00:10:01.299 Weighted Round Robin: Not Supported 00:10:01.299 Vendor Specific: Not Supported 00:10:01.299 Reset Timeout: 7500 ms 00:10:01.299 Doorbell Stride: 4 bytes 00:10:01.299 NVM Subsystem Reset: Not Supported 00:10:01.299 Command Sets Supported 00:10:01.300 NVM Command Set: Supported 00:10:01.300 Boot Partition: Not Supported 00:10:01.300 Memory Page Size Minimum: 4096 bytes 00:10:01.300 Memory Page Size Maximum: 65536 bytes 00:10:01.300 Persistent Memory Region: Not Supported 00:10:01.300 Optional Asynchronous Events Supported 00:10:01.300 Namespace Attribute Notices: Supported 00:10:01.300 Firmware Activation Notices: Not Supported 00:10:01.300 ANA Change Notices: Not Supported 00:10:01.300 PLE Aggregate Log Change Notices: Not Supported 00:10:01.300 LBA Status Info Alert Notices: Not Supported 00:10:01.300 EGE Aggregate Log Change Notices: Not Supported 00:10:01.300 Normal NVM Subsystem Shutdown event: Not Supported 00:10:01.300 Zone Descriptor Change Notices: Not Supported 00:10:01.300 Discovery Log Change Notices: Not Supported 00:10:01.300 Controller Attributes 00:10:01.300 128-bit Host Identifier: Not Supported 00:10:01.300 Non-Operational Permissive Mode: Not Supported 00:10:01.300 NVM Sets: Not Supported 00:10:01.300 Read Recovery Levels: Not Supported 00:10:01.300 Endurance Groups: Not Supported 00:10:01.300 Predictable Latency Mode: Not Supported 00:10:01.300 Traffic Based Keep ALive: Not Supported 00:10:01.300 Namespace Granularity: Not Supported 00:10:01.300 SQ Associations: Not Supported 00:10:01.300 UUID List: Not Supported 00:10:01.300 Multi-Domain Subsystem: Not Supported 00:10:01.300 Fixed Capacity Management: Not Supported 00:10:01.300 Variable Capacity 
Management: Not Supported 00:10:01.300 Delete Endurance Group: Not Supported 00:10:01.300 Delete NVM Set: Not Supported 00:10:01.300 Extended LBA Formats Supported: Supported 00:10:01.300 Flexible Data Placement Supported: Not Supported 00:10:01.300 00:10:01.300 Controller Memory Buffer Support 00:10:01.300 ================================ 00:10:01.300 Supported: No 00:10:01.300 00:10:01.300 Persistent Memory Region Support 00:10:01.300 ================================ 00:10:01.300 Supported: No 00:10:01.300 00:10:01.300 Admin Command Set Attributes 00:10:01.300 ============================ 00:10:01.300 Security Send/Receive: Not Supported 00:10:01.300 Format NVM: Supported 00:10:01.300 Firmware Activate/Download: Not Supported 00:10:01.300 Namespace Management: Supported 00:10:01.300 Device Self-Test: Not Supported 00:10:01.300 Directives: Supported 00:10:01.300 NVMe-MI: Not Supported 00:10:01.300 Virtualization Management: Not Supported 00:10:01.300 Doorbell Buffer Config: Supported 00:10:01.300 Get LBA Status Capability: Not Supported 00:10:01.300 Command & Feature Lockdown Capability: Not Supported 00:10:01.300 Abort Command Limit: 4 00:10:01.300 Async Event Request Limit: 4 00:10:01.300 Number of Firmware Slots: N/A 00:10:01.300 Firmware Slot 1 Read-Only: N/A 00:10:01.300 Firmware Activation Without Reset: N/A 00:10:01.300 Multiple Update Detection Support: N/A 00:10:01.300 Firmware Update Granularity: No Information Provided 00:10:01.300 Per-Namespace SMART Log: Yes 00:10:01.300 Asymmetric Namespace Access Log Page: Not Supported 00:10:01.300 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:01.300 Command Effects Log Page: Supported 00:10:01.300 Get Log Page Extended Data: Supported 00:10:01.300 Telemetry Log Pages: Not Supported 00:10:01.300 Persistent Event Log Pages: Not Supported 00:10:01.300 Supported Log Pages Log Page: May Support 00:10:01.300 Commands Supported & Effects Log Page: Not Supported 00:10:01.300 Feature Identifiers & Effects Log Page:May Support 00:10:01.300 NVMe-MI Commands & Effects Log Page: May Support 00:10:01.300 Data Area 4 for Telemetry Log: Not Supported 00:10:01.300 Error Log Page Entries Supported: 1 00:10:01.300 Keep Alive: Not Supported 00:10:01.300 00:10:01.300 NVM Command Set Attributes 00:10:01.300 ========================== 00:10:01.300 Submission Queue Entry Size 00:10:01.300 Max: 64 00:10:01.300 Min: 64 00:10:01.300 Completion Queue Entry Size 00:10:01.300 Max: 16 00:10:01.300 Min: 16 00:10:01.300 Number of Namespaces: 256 00:10:01.300 Compare Command: Supported 00:10:01.300 Write Uncorrectable Command: Not Supported 00:10:01.300 Dataset Management Command: Supported 00:10:01.300 Write Zeroes Command: Supported 00:10:01.300 Set Features Save Field: Supported 00:10:01.300 Reservations: Not Supported 00:10:01.300 Timestamp: Supported 00:10:01.300 Copy: Supported 00:10:01.300 Volatile Write Cache: Present 00:10:01.300 Atomic Write Unit (Normal): 1 00:10:01.300 Atomic Write Unit (PFail): 1 00:10:01.300 Atomic Compare & Write Unit: 1 00:10:01.300 Fused Compare & Write: Not Supported 00:10:01.300 Scatter-Gather List 00:10:01.300 SGL Command Set: Supported 00:10:01.300 SGL Keyed: Not Supported 00:10:01.300 SGL Bit Bucket Descriptor: Not Supported 00:10:01.300 SGL Metadata Pointer: Not Supported 00:10:01.300 Oversized SGL: Not Supported 00:10:01.300 SGL Metadata Address: Not Supported 00:10:01.300 SGL Offset: Not Supported 00:10:01.300 Transport SGL Data Block: Not Supported 00:10:01.300 Replay Protected Memory Block: Not Supported 00:10:01.300 
00:10:01.300 Firmware Slot Information 00:10:01.300 ========================= 00:10:01.300 Active slot: 1 00:10:01.300 Slot 1 Firmware Revision: 1.0 00:10:01.300 00:10:01.300 00:10:01.300 Commands Supported and Effects 00:10:01.300 ============================== 00:10:01.300 Admin Commands 00:10:01.300 -------------- 00:10:01.300 Delete I/O Submission Queue (00h): Supported 00:10:01.300 Create I/O Submission Queue (01h): Supported 00:10:01.300 Get Log Page (02h): Supported 00:10:01.300 Delete I/O Completion Queue (04h): Supported 00:10:01.300 Create I/O Completion Queue (05h): Supported 00:10:01.300 Identify (06h): Supported 00:10:01.300 Abort (08h): Supported 00:10:01.300 Set Features (09h): Supported 00:10:01.300 Get Features (0Ah): Supported 00:10:01.300 Asynchronous Event Request (0Ch): Supported 00:10:01.300 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:01.300 Directive Send (19h): Supported 00:10:01.300 Directive Receive (1Ah): Supported 00:10:01.300 Virtualization Management (1Ch): Supported 00:10:01.300 Doorbell Buffer Config (7Ch): Supported 00:10:01.300 Format NVM (80h): Supported LBA-Change 00:10:01.300 I/O Commands 00:10:01.300 ------------ 00:10:01.300 Flush (00h): Supported LBA-Change 00:10:01.300 Write (01h): Supported LBA-Change 00:10:01.300 Read (02h): Supported 00:10:01.300 Compare (05h): Supported 00:10:01.300 Write Zeroes (08h): Supported LBA-Change 00:10:01.300 Dataset Management (09h): Supported LBA-Change 00:10:01.300 Unknown (0Ch): Supported 00:10:01.300 Unknown (12h): Supported 00:10:01.300 Copy (19h): Supported LBA-Change 00:10:01.300 Unknown (1Dh): Supported LBA-Change 00:10:01.300 00:10:01.300 Error Log 00:10:01.300 ========= 00:10:01.300 00:10:01.300 Arbitration 00:10:01.300 =========== 00:10:01.300 Arbitration Burst: no limit 00:10:01.300 00:10:01.300 Power Management 00:10:01.300 ================ 00:10:01.300 Number of Power States: 1 00:10:01.300 Current Power State: Power State #0 00:10:01.300 Power State #0: 00:10:01.300 Max Power: 25.00 W 00:10:01.300 Non-Operational State: Operational 00:10:01.300 Entry Latency: 16 microseconds 00:10:01.300 Exit Latency: 4 microseconds 00:10:01.300 Relative Read Throughput: 0 00:10:01.300 Relative Read Latency: 0 00:10:01.300 Relative Write Throughput: 0 00:10:01.300 Relative Write Latency: 0 00:10:01.300 Idle Power: Not Reported 00:10:01.300 Active Power: Not Reported 00:10:01.300 Non-Operational Permissive Mode: Not Supported 00:10:01.300 00:10:01.300 Health Information 00:10:01.300 ================== 00:10:01.300 Critical Warnings: 00:10:01.300 Available Spare Space: OK 00:10:01.300 Temperature: OK 00:10:01.300 Device Reliability: OK 00:10:01.300 Read Only: No 00:10:01.300 Volatile Memory Backup: OK 00:10:01.300 Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.300 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:01.300 Available Spare: 0% 00:10:01.300 Available Spare Threshold: 0% 00:10:01.300 Life Percentage Used: 0% 00:10:01.300 Data Units Read: 1198 00:10:01.300 Data Units Written: 556 00:10:01.300 Host Read Commands: 59719 00:10:01.300 Host Write Commands: 29381 00:10:01.300 Controller Busy Time: 0 minutes 00:10:01.300 Power Cycles: 0 00:10:01.301 Power On Hours: 0 hours 00:10:01.301 Unsafe Shutdowns: 0 00:10:01.301 Unrecoverable Media Errors: 0 00:10:01.301 Lifetime Error Log Entries: 0 00:10:01.301 Warning Temperature Time: 0 minutes 00:10:01.301 Critical Temperature Time: 0 minutes 00:10:01.301 00:10:01.301 Number of Queues 00:10:01.301 ================ 00:10:01.301 Number of I/O 
Submission Queues: 64 00:10:01.301 Number of I/O Completion Queues: 64 00:10:01.301 00:10:01.301 ZNS Specific Controller Data 00:10:01.301 ============================ 00:10:01.301 Zone Append Size Limit: 0 00:10:01.301 00:10:01.301 00:10:01.301 Active Namespaces 00:10:01.301 ================= 00:10:01.301 Namespace ID:1 00:10:01.301 Error Recovery Timeout: Unlimited 00:10:01.301 Command Set Identifier: [2024-06-10 09:47:54.888839] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 64621 terminated unexpected 00:10:01.301 NVM (00h) 00:10:01.301 Deallocate: Supported 00:10:01.301 Deallocated/Unwritten Error: Supported 00:10:01.301 Deallocated Read Value: All 0x00 00:10:01.301 Deallocate in Write Zeroes: Not Supported 00:10:01.301 Deallocated Guard Field: 0xFFFF 00:10:01.301 Flush: Supported 00:10:01.301 Reservation: Not Supported 00:10:01.301 Namespace Sharing Capabilities: Private 00:10:01.301 Size (in LBAs): 1310720 (5GiB) 00:10:01.301 Capacity (in LBAs): 1310720 (5GiB) 00:10:01.301 Utilization (in LBAs): 1310720 (5GiB) 00:10:01.301 Thin Provisioning: Not Supported 00:10:01.301 Per-NS Atomic Units: No 00:10:01.301 Maximum Single Source Range Length: 128 00:10:01.301 Maximum Copy Length: 128 00:10:01.301 Maximum Source Range Count: 128 00:10:01.301 NGUID/EUI64 Never Reused: No 00:10:01.301 Namespace Write Protected: No 00:10:01.301 Number of LBA Formats: 8 00:10:01.301 Current LBA Format: LBA Format #04 00:10:01.301 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.301 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.301 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.301 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.301 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.301 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.301 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.301 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.301 00:10:01.301 ===================================================== 00:10:01.301 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:01.301 ===================================================== 00:10:01.301 Controller Capabilities/Features 00:10:01.301 ================================ 00:10:01.301 Vendor ID: 1b36 00:10:01.301 Subsystem Vendor ID: 1af4 00:10:01.301 Serial Number: 12343 00:10:01.301 Model Number: QEMU NVMe Ctrl 00:10:01.301 Firmware Version: 8.0.0 00:10:01.301 Recommended Arb Burst: 6 00:10:01.301 IEEE OUI Identifier: 00 54 52 00:10:01.301 Multi-path I/O 00:10:01.301 May have multiple subsystem ports: No 00:10:01.301 May have multiple controllers: Yes 00:10:01.301 Associated with SR-IOV VF: No 00:10:01.301 Max Data Transfer Size: 524288 00:10:01.301 Max Number of Namespaces: 256 00:10:01.301 Max Number of I/O Queues: 64 00:10:01.301 NVMe Specification Version (VS): 1.4 00:10:01.301 NVMe Specification Version (Identify): 1.4 00:10:01.301 Maximum Queue Entries: 2048 00:10:01.301 Contiguous Queues Required: Yes 00:10:01.301 Arbitration Mechanisms Supported 00:10:01.301 Weighted Round Robin: Not Supported 00:10:01.301 Vendor Specific: Not Supported 00:10:01.301 Reset Timeout: 7500 ms 00:10:01.301 Doorbell Stride: 4 bytes 00:10:01.301 NVM Subsystem Reset: Not Supported 00:10:01.301 Command Sets Supported 00:10:01.301 NVM Command Set: Supported 00:10:01.301 Boot Partition: Not Supported 00:10:01.301 Memory Page Size Minimum: 4096 bytes 00:10:01.301 Memory Page Size Maximum: 65536 bytes 00:10:01.301 Persistent Memory Region: Not Supported 00:10:01.301 
Optional Asynchronous Events Supported 00:10:01.301 Namespace Attribute Notices: Supported 00:10:01.301 Firmware Activation Notices: Not Supported 00:10:01.301 ANA Change Notices: Not Supported 00:10:01.301 PLE Aggregate Log Change Notices: Not Supported 00:10:01.301 LBA Status Info Alert Notices: Not Supported 00:10:01.301 EGE Aggregate Log Change Notices: Not Supported 00:10:01.301 Normal NVM Subsystem Shutdown event: Not Supported 00:10:01.301 Zone Descriptor Change Notices: Not Supported 00:10:01.301 Discovery Log Change Notices: Not Supported 00:10:01.301 Controller Attributes 00:10:01.301 128-bit Host Identifier: Not Supported 00:10:01.301 Non-Operational Permissive Mode: Not Supported 00:10:01.301 NVM Sets: Not Supported 00:10:01.301 Read Recovery Levels: Not Supported 00:10:01.301 Endurance Groups: Supported 00:10:01.301 Predictable Latency Mode: Not Supported 00:10:01.301 Traffic Based Keep ALive: Not Supported 00:10:01.301 Namespace Granularity: Not Supported 00:10:01.301 SQ Associations: Not Supported 00:10:01.301 UUID List: Not Supported 00:10:01.301 Multi-Domain Subsystem: Not Supported 00:10:01.301 Fixed Capacity Management: Not Supported 00:10:01.301 Variable Capacity Management: Not Supported 00:10:01.301 Delete Endurance Group: Not Supported 00:10:01.301 Delete NVM Set: Not Supported 00:10:01.301 Extended LBA Formats Supported: Supported 00:10:01.301 Flexible Data Placement Supported: Supported 00:10:01.301 00:10:01.301 Controller Memory Buffer Support 00:10:01.301 ================================ 00:10:01.301 Supported: No 00:10:01.301 00:10:01.301 Persistent Memory Region Support 00:10:01.301 ================================ 00:10:01.301 Supported: No 00:10:01.301 00:10:01.301 Admin Command Set Attributes 00:10:01.301 ============================ 00:10:01.301 Security Send/Receive: Not Supported 00:10:01.301 Format NVM: Supported 00:10:01.301 Firmware Activate/Download: Not Supported 00:10:01.301 Namespace Management: Supported 00:10:01.301 Device Self-Test: Not Supported 00:10:01.301 Directives: Supported 00:10:01.301 NVMe-MI: Not Supported 00:10:01.301 Virtualization Management: Not Supported 00:10:01.301 Doorbell Buffer Config: Supported 00:10:01.301 Get LBA Status Capability: Not Supported 00:10:01.301 Command & Feature Lockdown Capability: Not Supported 00:10:01.301 Abort Command Limit: 4 00:10:01.301 Async Event Request Limit: 4 00:10:01.301 Number of Firmware Slots: N/A 00:10:01.301 Firmware Slot 1 Read-Only: N/A 00:10:01.301 Firmware Activation Without Reset: N/A 00:10:01.301 Multiple Update Detection Support: N/A 00:10:01.301 Firmware Update Granularity: No Information Provided 00:10:01.301 Per-Namespace SMART Log: Yes 00:10:01.301 Asymmetric Namespace Access Log Page: Not Supported 00:10:01.301 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:01.301 Command Effects Log Page: Supported 00:10:01.301 Get Log Page Extended Data: Supported 00:10:01.301 Telemetry Log Pages: Not Supported 00:10:01.301 Persistent Event Log Pages: Not Supported 00:10:01.301 Supported Log Pages Log Page: May Support 00:10:01.301 Commands Supported & Effects Log Page: Not Supported 00:10:01.301 Feature Identifiers & Effects Log Page:May Support 00:10:01.301 NVMe-MI Commands & Effects Log Page: May Support 00:10:01.301 Data Area 4 for Telemetry Log: Not Supported 00:10:01.301 Error Log Page Entries Supported: 1 00:10:01.301 Keep Alive: Not Supported 00:10:01.301 00:10:01.301 NVM Command Set Attributes 00:10:01.301 ========================== 00:10:01.301 Submission Queue Entry Size 
00:10:01.301 Max: 64 00:10:01.301 Min: 64 00:10:01.301 Completion Queue Entry Size 00:10:01.301 Max: 16 00:10:01.301 Min: 16 00:10:01.301 Number of Namespaces: 256 00:10:01.301 Compare Command: Supported 00:10:01.301 Write Uncorrectable Command: Not Supported 00:10:01.301 Dataset Management Command: Supported 00:10:01.301 Write Zeroes Command: Supported 00:10:01.301 Set Features Save Field: Supported 00:10:01.301 Reservations: Not Supported 00:10:01.301 Timestamp: Supported 00:10:01.301 Copy: Supported 00:10:01.301 Volatile Write Cache: Present 00:10:01.301 Atomic Write Unit (Normal): 1 00:10:01.301 Atomic Write Unit (PFail): 1 00:10:01.301 Atomic Compare & Write Unit: 1 00:10:01.301 Fused Compare & Write: Not Supported 00:10:01.301 Scatter-Gather List 00:10:01.301 SGL Command Set: Supported 00:10:01.301 SGL Keyed: Not Supported 00:10:01.301 SGL Bit Bucket Descriptor: Not Supported 00:10:01.301 SGL Metadata Pointer: Not Supported 00:10:01.301 Oversized SGL: Not Supported 00:10:01.302 SGL Metadata Address: Not Supported 00:10:01.302 SGL Offset: Not Supported 00:10:01.302 Transport SGL Data Block: Not Supported 00:10:01.302 Replay Protected Memory Block: Not Supported 00:10:01.302 00:10:01.302 Firmware Slot Information 00:10:01.302 ========================= 00:10:01.302 Active slot: 1 00:10:01.302 Slot 1 Firmware Revision: 1.0 00:10:01.302 00:10:01.302 00:10:01.302 Commands Supported and Effects 00:10:01.302 ============================== 00:10:01.302 Admin Commands 00:10:01.302 -------------- 00:10:01.302 Delete I/O Submission Queue (00h): Supported 00:10:01.302 Create I/O Submission Queue (01h): Supported 00:10:01.302 Get Log Page (02h): Supported 00:10:01.302 Delete I/O Completion Queue (04h): Supported 00:10:01.302 Create I/O Completion Queue (05h): Supported 00:10:01.302 Identify (06h): Supported 00:10:01.302 Abort (08h): Supported 00:10:01.302 Set Features (09h): Supported 00:10:01.302 Get Features (0Ah): Supported 00:10:01.302 Asynchronous Event Request (0Ch): Supported 00:10:01.302 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:01.302 Directive Send (19h): Supported 00:10:01.302 Directive Receive (1Ah): Supported 00:10:01.302 Virtualization Management (1Ch): Supported 00:10:01.302 Doorbell Buffer Config (7Ch): Supported 00:10:01.302 Format NVM (80h): Supported LBA-Change 00:10:01.302 I/O Commands 00:10:01.302 ------------ 00:10:01.302 Flush (00h): Supported LBA-Change 00:10:01.302 Write (01h): Supported LBA-Change 00:10:01.302 Read (02h): Supported 00:10:01.302 Compare (05h): Supported 00:10:01.302 Write Zeroes (08h): Supported LBA-Change 00:10:01.302 Dataset Management (09h): Supported LBA-Change 00:10:01.302 Unknown (0Ch): Supported 00:10:01.302 Unknown (12h): Supported 00:10:01.302 Copy (19h): Supported LBA-Change 00:10:01.302 Unknown (1Dh): Supported LBA-Change 00:10:01.302 00:10:01.302 Error Log 00:10:01.302 ========= 00:10:01.302 00:10:01.302 Arbitration 00:10:01.302 =========== 00:10:01.302 Arbitration Burst: no limit 00:10:01.302 00:10:01.302 Power Management 00:10:01.302 ================ 00:10:01.302 Number of Power States: 1 00:10:01.302 Current Power State: Power State #0 00:10:01.302 Power State #0: 00:10:01.302 Max Power: 25.00 W 00:10:01.302 Non-Operational State: Operational 00:10:01.302 Entry Latency: 16 microseconds 00:10:01.302 Exit Latency: 4 microseconds 00:10:01.302 Relative Read Throughput: 0 00:10:01.302 Relative Read Latency: 0 00:10:01.302 Relative Write Throughput: 0 00:10:01.302 Relative Write Latency: 0 00:10:01.302 Idle Power: Not 
Reported 00:10:01.302 Active Power: Not Reported 00:10:01.302 Non-Operational Permissive Mode: Not Supported 00:10:01.302 00:10:01.302 Health Information 00:10:01.302 ================== 00:10:01.302 Critical Warnings: 00:10:01.302 Available Spare Space: OK 00:10:01.302 Temperature: OK 00:10:01.302 Device Reliability: OK 00:10:01.302 Read Only: No 00:10:01.302 Volatile Memory Backup: OK 00:10:01.302 Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.302 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:01.302 Available Spare: 0% 00:10:01.302 Available Spare Threshold: 0% 00:10:01.302 Life Percentage Used: 0% 00:10:01.302 Data Units Read: 1249 00:10:01.302 Data Units Written: 594 00:10:01.302 Host Read Commands: 59718 00:10:01.302 Host Write Commands: 29763 00:10:01.302 Controller Busy Time: 0 minutes 00:10:01.302 Power Cycles: 0 00:10:01.302 Power On Hours: 0 hours 00:10:01.302 Unsafe Shutdowns: 0 00:10:01.302 Unrecoverable Media Errors: 0 00:10:01.302 Lifetime Error Log Entries: 0 00:10:01.302 Warning Temperature Time: 0 minutes 00:10:01.302 Critical Temperature Time: 0 minutes 00:10:01.302 00:10:01.302 Number of Queues 00:10:01.302 ================ 00:10:01.302 Number of I/O Submission Queues: 64 00:10:01.302 Number of I/O Completion Queues: 64 00:10:01.302 00:10:01.302 ZNS Specific Controller Data 00:10:01.302 ============================ 00:10:01.302 Zone Append Size Limit: 0 00:10:01.302 00:10:01.302 00:10:01.302 Active Namespaces 00:10:01.302 ================= 00:10:01.302 Namespace ID:1 00:10:01.302 Error Recovery Timeout: Unlimited 00:10:01.302 Command Set Identifier: NVM (00h) 00:10:01.302 Deallocate: Supported 00:10:01.302 Deallocated/Unwritten Error: Supported 00:10:01.302 Deallocated Read Value: All 0x00 00:10:01.302 Deallocate in Write Zeroes: Not Supported 00:10:01.302 Deallocated Guard Field: 0xFFFF 00:10:01.302 Flush: Supported 00:10:01.302 Reservation: Not Supported 00:10:01.302 Namespace Sharing Capabilities: Multiple Controllers 00:10:01.302 Size (in LBAs): 262144 (1GiB) 00:10:01.302 Capacity (in LBAs): 262144 (1GiB) 00:10:01.302 Utilization (in LBAs): 262144 (1GiB) 00:10:01.302 Thin Provisioning: Not Supported 00:10:01.302 Per-NS Atomic Units: No 00:10:01.302 Maximum Single Source Range Length: 128 00:10:01.302 Maximum Copy Length: 128 00:10:01.302 Maximum Source Range Count: 128 00:10:01.302 NGUID/EUI64 Never Reused: No 00:10:01.302 Namespace Write Protected: No 00:10:01.302 Endurance group ID: 1 00:10:01.302 Number of LBA Formats: 8 00:10:01.302 Current LBA Format: LBA Format #04 00:10:01.302 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.302 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.302 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.302 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.302 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.302 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.302 LBA Format #06: Data Si[2024-06-10 09:47:54.890594] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 64621 terminated unexpected 00:10:01.302 ze: 4096 Metadata Size: 16 00:10:01.302 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.302 00:10:01.302 Get Feature FDP: 00:10:01.302 ================ 00:10:01.302 Enabled: Yes 00:10:01.302 FDP configuration index: 0 00:10:01.302 00:10:01.302 FDP configurations log page 00:10:01.302 =========================== 00:10:01.302 Number of FDP configurations: 1 00:10:01.302 Version: 0 00:10:01.302 Size: 112 00:10:01.302 FDP 
Configuration Descriptor: 0 00:10:01.302 Descriptor Size: 96 00:10:01.302 Reclaim Group Identifier format: 2 00:10:01.302 FDP Volatile Write Cache: Not Present 00:10:01.302 FDP Configuration: Valid 00:10:01.302 Vendor Specific Size: 0 00:10:01.302 Number of Reclaim Groups: 2 00:10:01.302 Number of Reclaim Unit Handles: 8 00:10:01.302 Max Placement Identifiers: 128 00:10:01.302 Number of Namespaces Supported: 256 00:10:01.302 Reclaim Unit Nominal Size: 6000000 bytes 00:10:01.302 Estimated Reclaim Unit Time Limit: Not Reported 00:10:01.302 RUH Desc #000: RUH Type: Initially Isolated 00:10:01.302 RUH Desc #001: RUH Type: Initially Isolated 00:10:01.302 RUH Desc #002: RUH Type: Initially Isolated 00:10:01.302 RUH Desc #003: RUH Type: Initially Isolated 00:10:01.302 RUH Desc #004: RUH Type: Initially Isolated 00:10:01.302 RUH Desc #005: RUH Type: Initially Isolated 00:10:01.302 RUH Desc #006: RUH Type: Initially Isolated 00:10:01.302 RUH Desc #007: RUH Type: Initially Isolated 00:10:01.302 00:10:01.302 FDP reclaim unit handle usage log page 00:10:01.302 ====================================== 00:10:01.302 Number of Reclaim Unit Handles: 8 00:10:01.302 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:01.302 RUH Usage Desc #001: RUH Attributes: Unused 00:10:01.302 RUH Usage Desc #002: RUH Attributes: Unused 00:10:01.302 RUH Usage Desc #003: RUH Attributes: Unused 00:10:01.302 RUH Usage Desc #004: RUH Attributes: Unused 00:10:01.302 RUH Usage Desc #005: RUH Attributes: Unused 00:10:01.302 RUH Usage Desc #006: RUH Attributes: Unused 00:10:01.302 RUH Usage Desc #007: RUH Attributes: Unused 00:10:01.302 00:10:01.302 FDP statistics log page 00:10:01.302 ======================= 00:10:01.302 Host bytes with metadata written: 387362816 00:10:01.302 Media bytes with metadata written: 387448832 00:10:01.302 Media bytes erased: 0 00:10:01.302 00:10:01.302 FDP events log page 00:10:01.302 =================== 00:10:01.302 Number of FDP events: 0 00:10:01.302 00:10:01.302 ===================================================== 00:10:01.302 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:01.303 ===================================================== 00:10:01.303 Controller Capabilities/Features 00:10:01.303 ================================ 00:10:01.303 Vendor ID: 1b36 00:10:01.303 Subsystem Vendor ID: 1af4 00:10:01.303 Serial Number: 12342 00:10:01.303 Model Number: QEMU NVMe Ctrl 00:10:01.303 Firmware Version: 8.0.0 00:10:01.303 Recommended Arb Burst: 6 00:10:01.303 IEEE OUI Identifier: 00 54 52 00:10:01.303 Multi-path I/O 00:10:01.303 May have multiple subsystem ports: No 00:10:01.303 May have multiple controllers: No 00:10:01.303 Associated with SR-IOV VF: No 00:10:01.303 Max Data Transfer Size: 524288 00:10:01.303 Max Number of Namespaces: 256 00:10:01.303 Max Number of I/O Queues: 64 00:10:01.303 NVMe Specification Version (VS): 1.4 00:10:01.303 NVMe Specification Version (Identify): 1.4 00:10:01.303 Maximum Queue Entries: 2048 00:10:01.303 Contiguous Queues Required: Yes 00:10:01.303 Arbitration Mechanisms Supported 00:10:01.303 Weighted Round Robin: Not Supported 00:10:01.303 Vendor Specific: Not Supported 00:10:01.303 Reset Timeout: 7500 ms 00:10:01.303 Doorbell Stride: 4 bytes 00:10:01.303 NVM Subsystem Reset: Not Supported 00:10:01.303 Command Sets Supported 00:10:01.303 NVM Command Set: Supported 00:10:01.303 Boot Partition: Not Supported 00:10:01.303 Memory Page Size Minimum: 4096 bytes 00:10:01.303 Memory Page Size Maximum: 65536 bytes 00:10:01.303 Persistent Memory Region: Not
Supported 00:10:01.303 Optional Asynchronous Events Supported 00:10:01.303 Namespace Attribute Notices: Supported 00:10:01.303 Firmware Activation Notices: Not Supported 00:10:01.303 ANA Change Notices: Not Supported 00:10:01.303 PLE Aggregate Log Change Notices: Not Supported 00:10:01.303 LBA Status Info Alert Notices: Not Supported 00:10:01.303 EGE Aggregate Log Change Notices: Not Supported 00:10:01.303 Normal NVM Subsystem Shutdown event: Not Supported 00:10:01.303 Zone Descriptor Change Notices: Not Supported 00:10:01.303 Discovery Log Change Notices: Not Supported 00:10:01.303 Controller Attributes 00:10:01.303 128-bit Host Identifier: Not Supported 00:10:01.303 Non-Operational Permissive Mode: Not Supported 00:10:01.303 NVM Sets: Not Supported 00:10:01.303 Read Recovery Levels: Not Supported 00:10:01.303 Endurance Groups: Not Supported 00:10:01.303 Predictable Latency Mode: Not Supported 00:10:01.303 Traffic Based Keep ALive: Not Supported 00:10:01.303 Namespace Granularity: Not Supported 00:10:01.303 SQ Associations: Not Supported 00:10:01.303 UUID List: Not Supported 00:10:01.303 Multi-Domain Subsystem: Not Supported 00:10:01.303 Fixed Capacity Management: Not Supported 00:10:01.303 Variable Capacity Management: Not Supported 00:10:01.303 Delete Endurance Group: Not Supported 00:10:01.303 Delete NVM Set: Not Supported 00:10:01.303 Extended LBA Formats Supported: Supported 00:10:01.303 Flexible Data Placement Supported: Not Supported 00:10:01.303 00:10:01.303 Controller Memory Buffer Support 00:10:01.303 ================================ 00:10:01.303 Supported: No 00:10:01.303 00:10:01.303 Persistent Memory Region Support 00:10:01.303 ================================ 00:10:01.303 Supported: No 00:10:01.303 00:10:01.303 Admin Command Set Attributes 00:10:01.303 ============================ 00:10:01.303 Security Send/Receive: Not Supported 00:10:01.303 Format NVM: Supported 00:10:01.303 Firmware Activate/Download: Not Supported 00:10:01.303 Namespace Management: Supported 00:10:01.303 Device Self-Test: Not Supported 00:10:01.303 Directives: Supported 00:10:01.303 NVMe-MI: Not Supported 00:10:01.303 Virtualization Management: Not Supported 00:10:01.303 Doorbell Buffer Config: Supported 00:10:01.303 Get LBA Status Capability: Not Supported 00:10:01.303 Command & Feature Lockdown Capability: Not Supported 00:10:01.303 Abort Command Limit: 4 00:10:01.303 Async Event Request Limit: 4 00:10:01.303 Number of Firmware Slots: N/A 00:10:01.303 Firmware Slot 1 Read-Only: N/A 00:10:01.303 Firmware Activation Without Reset: N/A 00:10:01.303 Multiple Update Detection Support: N/A 00:10:01.303 Firmware Update Granularity: No Information Provided 00:10:01.303 Per-Namespace SMART Log: Yes 00:10:01.303 Asymmetric Namespace Access Log Page: Not Supported 00:10:01.303 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:01.303 Command Effects Log Page: Supported 00:10:01.303 Get Log Page Extended Data: Supported 00:10:01.303 Telemetry Log Pages: Not Supported 00:10:01.303 Persistent Event Log Pages: Not Supported 00:10:01.303 Supported Log Pages Log Page: May Support 00:10:01.303 Commands Supported & Effects Log Page: Not Supported 00:10:01.303 Feature Identifiers & Effects Log Page:May Support 00:10:01.303 NVMe-MI Commands & Effects Log Page: May Support 00:10:01.303 Data Area 4 for Telemetry Log: Not Supported 00:10:01.303 Error Log Page Entries Supported: 1 00:10:01.303 Keep Alive: Not Supported 00:10:01.303 00:10:01.303 NVM Command Set Attributes 00:10:01.303 ========================== 00:10:01.303 
Submission Queue Entry Size 00:10:01.303 Max: 64 00:10:01.303 Min: 64 00:10:01.303 Completion Queue Entry Size 00:10:01.303 Max: 16 00:10:01.303 Min: 16 00:10:01.303 Number of Namespaces: 256 00:10:01.303 Compare Command: Supported 00:10:01.303 Write Uncorrectable Command: Not Supported 00:10:01.303 Dataset Management Command: Supported 00:10:01.303 Write Zeroes Command: Supported 00:10:01.303 Set Features Save Field: Supported 00:10:01.303 Reservations: Not Supported 00:10:01.303 Timestamp: Supported 00:10:01.303 Copy: Supported 00:10:01.303 Volatile Write Cache: Present 00:10:01.303 Atomic Write Unit (Normal): 1 00:10:01.303 Atomic Write Unit (PFail): 1 00:10:01.303 Atomic Compare & Write Unit: 1 00:10:01.303 Fused Compare & Write: Not Supported 00:10:01.303 Scatter-Gather List 00:10:01.303 SGL Command Set: Supported 00:10:01.303 SGL Keyed: Not Supported 00:10:01.303 SGL Bit Bucket Descriptor: Not Supported 00:10:01.303 SGL Metadata Pointer: Not Supported 00:10:01.303 Oversized SGL: Not Supported 00:10:01.303 SGL Metadata Address: Not Supported 00:10:01.303 SGL Offset: Not Supported 00:10:01.303 Transport SGL Data Block: Not Supported 00:10:01.303 Replay Protected Memory Block: Not Supported 00:10:01.303 00:10:01.303 Firmware Slot Information 00:10:01.303 ========================= 00:10:01.303 Active slot: 1 00:10:01.303 Slot 1 Firmware Revision: 1.0 00:10:01.303 00:10:01.303 00:10:01.303 Commands Supported and Effects 00:10:01.303 ============================== 00:10:01.303 Admin Commands 00:10:01.303 -------------- 00:10:01.303 Delete I/O Submission Queue (00h): Supported 00:10:01.303 Create I/O Submission Queue (01h): Supported 00:10:01.303 Get Log Page (02h): Supported 00:10:01.303 Delete I/O Completion Queue (04h): Supported 00:10:01.303 Create I/O Completion Queue (05h): Supported 00:10:01.303 Identify (06h): Supported 00:10:01.303 Abort (08h): Supported 00:10:01.303 Set Features (09h): Supported 00:10:01.303 Get Features (0Ah): Supported 00:10:01.303 Asynchronous Event Request (0Ch): Supported 00:10:01.303 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:01.304 Directive Send (19h): Supported 00:10:01.304 Directive Receive (1Ah): Supported 00:10:01.304 Virtualization Management (1Ch): Supported 00:10:01.304 Doorbell Buffer Config (7Ch): Supported 00:10:01.304 Format NVM (80h): Supported LBA-Change 00:10:01.304 I/O Commands 00:10:01.304 ------------ 00:10:01.304 Flush (00h): Supported LBA-Change 00:10:01.304 Write (01h): Supported LBA-Change 00:10:01.304 Read (02h): Supported 00:10:01.304 Compare (05h): Supported 00:10:01.304 Write Zeroes (08h): Supported LBA-Change 00:10:01.304 Dataset Management (09h): Supported LBA-Change 00:10:01.304 Unknown (0Ch): Supported 00:10:01.304 Unknown (12h): Supported 00:10:01.304 Copy (19h): Supported LBA-Change 00:10:01.304 Unknown (1Dh): Supported LBA-Change 00:10:01.304 00:10:01.304 Error Log 00:10:01.304 ========= 00:10:01.304 00:10:01.304 Arbitration 00:10:01.304 =========== 00:10:01.304 Arbitration Burst: no limit 00:10:01.304 00:10:01.304 Power Management 00:10:01.304 ================ 00:10:01.304 Number of Power States: 1 00:10:01.304 Current Power State: Power State #0 00:10:01.304 Power State #0: 00:10:01.304 Max Power: 25.00 W 00:10:01.304 Non-Operational State: Operational 00:10:01.304 Entry Latency: 16 microseconds 00:10:01.304 Exit Latency: 4 microseconds 00:10:01.304 Relative Read Throughput: 0 00:10:01.304 Relative Read Latency: 0 00:10:01.304 Relative Write Throughput: 0 00:10:01.304 Relative Write Latency: 0 
00:10:01.304 Idle Power: Not Reported 00:10:01.304 Active Power: Not Reported 00:10:01.304 Non-Operational Permissive Mode: Not Supported 00:10:01.304 00:10:01.304 Health Information 00:10:01.304 ================== 00:10:01.304 Critical Warnings: 00:10:01.304 Available Spare Space: OK 00:10:01.304 Temperature: OK 00:10:01.304 Device Reliability: OK 00:10:01.304 Read Only: No 00:10:01.304 Volatile Memory Backup: OK 00:10:01.304 Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.304 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:01.304 Available Spare: 0% 00:10:01.304 Available Spare Threshold: 0% 00:10:01.304 Life Percentage Used: 0% 00:10:01.304 Data Units Read: 3683 00:10:01.304 Data Units Written: 1703 00:10:01.304 Host Read Commands: 180316 00:10:01.304 Host Write Commands: 88612 00:10:01.304 Controller Busy Time: 0 minutes 00:10:01.304 Power Cycles: 0 00:10:01.304 Power On Hours: 0 hours 00:10:01.304 Unsafe Shutdowns: 0 00:10:01.304 Unrecoverable Media Errors: 0 00:10:01.304 Lifetime Error Log Entries: 0 00:10:01.304 Warning Temperature Time: 0 minutes 00:10:01.304 Critical Temperature Time: 0 minutes 00:10:01.304 00:10:01.304 Number of Queues 00:10:01.304 ================ 00:10:01.304 Number of I/O Submission Queues: 64 00:10:01.304 Number of I/O Completion Queues: 64 00:10:01.304 00:10:01.304 ZNS Specific Controller Data 00:10:01.304 ============================ 00:10:01.304 Zone Append Size Limit: 0 00:10:01.304 00:10:01.304 00:10:01.304 Active Namespaces 00:10:01.304 ================= 00:10:01.304 Namespace ID:1 00:10:01.304 Error Recovery Timeout: Unlimited 00:10:01.304 Command Set Identifier: NVM (00h) 00:10:01.304 Deallocate: Supported 00:10:01.304 Deallocated/Unwritten Error: Supported 00:10:01.304 Deallocated Read Value: All 0x00 00:10:01.304 Deallocate in Write Zeroes: Not Supported 00:10:01.304 Deallocated Guard Field: 0xFFFF 00:10:01.304 Flush: Supported 00:10:01.304 Reservation: Not Supported 00:10:01.304 Namespace Sharing Capabilities: Private 00:10:01.304 Size (in LBAs): 1048576 (4GiB) 00:10:01.304 Capacity (in LBAs): 1048576 (4GiB) 00:10:01.304 Utilization (in LBAs): 1048576 (4GiB) 00:10:01.304 Thin Provisioning: Not Supported 00:10:01.304 Per-NS Atomic Units: No 00:10:01.304 Maximum Single Source Range Length: 128 00:10:01.304 Maximum Copy Length: 128 00:10:01.304 Maximum Source Range Count: 128 00:10:01.304 NGUID/EUI64 Never Reused: No 00:10:01.304 Namespace Write Protected: No 00:10:01.304 Number of LBA Formats: 8 00:10:01.304 Current LBA Format: LBA Format #04 00:10:01.304 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.304 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.304 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.304 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.304 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.304 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.304 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.304 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.304 00:10:01.304 Namespace ID:2 00:10:01.304 Error Recovery Timeout: Unlimited 00:10:01.304 Command Set Identifier: NVM (00h) 00:10:01.304 Deallocate: Supported 00:10:01.304 Deallocated/Unwritten Error: Supported 00:10:01.304 Deallocated Read Value: All 0x00 00:10:01.304 Deallocate in Write Zeroes: Not Supported 00:10:01.304 Deallocated Guard Field: 0xFFFF 00:10:01.304 Flush: Supported 00:10:01.304 Reservation: Not Supported 00:10:01.304 Namespace Sharing Capabilities: Private 00:10:01.304 Size (in LBAs): 
1048576 (4GiB) 00:10:01.304 Capacity (in LBAs): 1048576 (4GiB) 00:10:01.304 Utilization (in LBAs): 1048576 (4GiB) 00:10:01.304 Thin Provisioning: Not Supported 00:10:01.304 Per-NS Atomic Units: No 00:10:01.304 Maximum Single Source Range Length: 128 00:10:01.304 Maximum Copy Length: 128 00:10:01.304 Maximum Source Range Count: 128 00:10:01.304 NGUID/EUI64 Never Reused: No 00:10:01.304 Namespace Write Protected: No 00:10:01.304 Number of LBA Formats: 8 00:10:01.304 Current LBA Format: LBA Format #04 00:10:01.304 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.304 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.304 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.304 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.304 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.304 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.304 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.304 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.304 00:10:01.304 Namespace ID:3 00:10:01.304 Error Recovery Timeout: Unlimited 00:10:01.304 Command Set Identifier: NVM (00h) 00:10:01.304 Deallocate: Supported 00:10:01.304 Deallocated/Unwritten Error: Supported 00:10:01.304 Deallocated Read Value: All 0x00 00:10:01.304 Deallocate in Write Zeroes: Not Supported 00:10:01.304 Deallocated Guard Field: 0xFFFF 00:10:01.304 Flush: Supported 00:10:01.304 Reservation: Not Supported 00:10:01.304 Namespace Sharing Capabilities: Private 00:10:01.304 Size (in LBAs): 1048576 (4GiB) 00:10:01.304 Capacity (in LBAs): 1048576 (4GiB) 00:10:01.304 Utilization (in LBAs): 1048576 (4GiB) 00:10:01.304 Thin Provisioning: Not Supported 00:10:01.304 Per-NS Atomic Units: No 00:10:01.304 Maximum Single Source Range Length: 128 00:10:01.304 Maximum Copy Length: 128 00:10:01.304 Maximum Source Range Count: 128 00:10:01.304 NGUID/EUI64 Never Reused: No 00:10:01.304 Namespace Write Protected: No 00:10:01.304 Number of LBA Formats: 8 00:10:01.304 Current LBA Format: LBA Format #04 00:10:01.304 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.304 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.304 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.304 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.304 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.304 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.304 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.304 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.304 00:10:01.304 09:47:54 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:01.304 09:47:54 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:10:01.564 ===================================================== 00:10:01.564 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:01.564 ===================================================== 00:10:01.564 Controller Capabilities/Features 00:10:01.564 ================================ 00:10:01.564 Vendor ID: 1b36 00:10:01.564 Subsystem Vendor ID: 1af4 00:10:01.564 Serial Number: 12340 00:10:01.564 Model Number: QEMU NVMe Ctrl 00:10:01.564 Firmware Version: 8.0.0 00:10:01.564 Recommended Arb Burst: 6 00:10:01.564 IEEE OUI Identifier: 00 54 52 00:10:01.564 Multi-path I/O 00:10:01.564 May have multiple subsystem ports: No 00:10:01.564 May have multiple controllers: No 00:10:01.564 Associated with SR-IOV VF: No 00:10:01.564 Max Data Transfer Size: 524288 00:10:01.564 Max Number of Namespaces: 256 
00:10:01.564 Max Number of I/O Queues: 64 00:10:01.564 NVMe Specification Version (VS): 1.4 00:10:01.564 NVMe Specification Version (Identify): 1.4 00:10:01.564 Maximum Queue Entries: 2048 00:10:01.564 Contiguous Queues Required: Yes 00:10:01.564 Arbitration Mechanisms Supported 00:10:01.564 Weighted Round Robin: Not Supported 00:10:01.564 Vendor Specific: Not Supported 00:10:01.564 Reset Timeout: 7500 ms 00:10:01.564 Doorbell Stride: 4 bytes 00:10:01.564 NVM Subsystem Reset: Not Supported 00:10:01.564 Command Sets Supported 00:10:01.564 NVM Command Set: Supported 00:10:01.564 Boot Partition: Not Supported 00:10:01.564 Memory Page Size Minimum: 4096 bytes 00:10:01.564 Memory Page Size Maximum: 65536 bytes 00:10:01.564 Persistent Memory Region: Not Supported 00:10:01.564 Optional Asynchronous Events Supported 00:10:01.564 Namespace Attribute Notices: Supported 00:10:01.564 Firmware Activation Notices: Not Supported 00:10:01.564 ANA Change Notices: Not Supported 00:10:01.564 PLE Aggregate Log Change Notices: Not Supported 00:10:01.564 LBA Status Info Alert Notices: Not Supported 00:10:01.564 EGE Aggregate Log Change Notices: Not Supported 00:10:01.564 Normal NVM Subsystem Shutdown event: Not Supported 00:10:01.564 Zone Descriptor Change Notices: Not Supported 00:10:01.564 Discovery Log Change Notices: Not Supported 00:10:01.564 Controller Attributes 00:10:01.564 128-bit Host Identifier: Not Supported 00:10:01.564 Non-Operational Permissive Mode: Not Supported 00:10:01.564 NVM Sets: Not Supported 00:10:01.564 Read Recovery Levels: Not Supported 00:10:01.564 Endurance Groups: Not Supported 00:10:01.564 Predictable Latency Mode: Not Supported 00:10:01.564 Traffic Based Keep Alive: Not Supported 00:10:01.564 Namespace Granularity: Not Supported 00:10:01.564 SQ Associations: Not Supported 00:10:01.565 UUID List: Not Supported 00:10:01.565 Multi-Domain Subsystem: Not Supported 00:10:01.565 Fixed Capacity Management: Not Supported 00:10:01.565 Variable Capacity Management: Not Supported 00:10:01.565 Delete Endurance Group: Not Supported 00:10:01.565 Delete NVM Set: Not Supported 00:10:01.565 Extended LBA Formats Supported: Supported 00:10:01.565 Flexible Data Placement Supported: Not Supported 00:10:01.565 00:10:01.565 Controller Memory Buffer Support 00:10:01.565 ================================ 00:10:01.565 Supported: No 00:10:01.565 00:10:01.565 Persistent Memory Region Support 00:10:01.565 ================================ 00:10:01.565 Supported: No 00:10:01.565 00:10:01.565 Admin Command Set Attributes 00:10:01.565 ============================ 00:10:01.565 Security Send/Receive: Not Supported 00:10:01.565 Format NVM: Supported 00:10:01.565 Firmware Activate/Download: Not Supported 00:10:01.565 Namespace Management: Supported 00:10:01.565 Device Self-Test: Not Supported 00:10:01.565 Directives: Supported 00:10:01.565 NVMe-MI: Not Supported 00:10:01.565 Virtualization Management: Not Supported 00:10:01.565 Doorbell Buffer Config: Supported 00:10:01.565 Get LBA Status Capability: Not Supported 00:10:01.565 Command & Feature Lockdown Capability: Not Supported 00:10:01.565 Abort Command Limit: 4 00:10:01.565 Async Event Request Limit: 4 00:10:01.565 Number of Firmware Slots: N/A 00:10:01.565 Firmware Slot 1 Read-Only: N/A 00:10:01.565 Firmware Activation Without Reset: N/A 00:10:01.565 Multiple Update Detection Support: N/A 00:10:01.565 Firmware Update Granularity: No Information Provided 00:10:01.565 Per-Namespace SMART Log: Yes 00:10:01.565 Asymmetric Namespace Access Log Page: Not Supported 
00:10:01.565 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:01.565 Command Effects Log Page: Supported 00:10:01.565 Get Log Page Extended Data: Supported 00:10:01.565 Telemetry Log Pages: Not Supported 00:10:01.565 Persistent Event Log Pages: Not Supported 00:10:01.565 Supported Log Pages Log Page: May Support 00:10:01.565 Commands Supported & Effects Log Page: Not Supported 00:10:01.565 Feature Identifiers & Effects Log Page: May Support 00:10:01.565 NVMe-MI Commands & Effects Log Page: May Support 00:10:01.565 Data Area 4 for Telemetry Log: Not Supported 00:10:01.565 Error Log Page Entries Supported: 1 00:10:01.565 Keep Alive: Not Supported 00:10:01.565 00:10:01.565 NVM Command Set Attributes 00:10:01.565 ========================== 00:10:01.565 Submission Queue Entry Size 00:10:01.565 Max: 64 00:10:01.565 Min: 64 00:10:01.565 Completion Queue Entry Size 00:10:01.565 Max: 16 00:10:01.565 Min: 16 00:10:01.565 Number of Namespaces: 256 00:10:01.565 Compare Command: Supported 00:10:01.565 Write Uncorrectable Command: Not Supported 00:10:01.565 Dataset Management Command: Supported 00:10:01.565 Write Zeroes Command: Supported 00:10:01.565 Set Features Save Field: Supported 00:10:01.565 Reservations: Not Supported 00:10:01.565 Timestamp: Supported 00:10:01.565 Copy: Supported 00:10:01.565 Volatile Write Cache: Present 00:10:01.565 Atomic Write Unit (Normal): 1 00:10:01.565 Atomic Write Unit (PFail): 1 00:10:01.565 Atomic Compare & Write Unit: 1 00:10:01.565 Fused Compare & Write: Not Supported 00:10:01.565 Scatter-Gather List 00:10:01.565 SGL Command Set: Supported 00:10:01.565 SGL Keyed: Not Supported 00:10:01.565 SGL Bit Bucket Descriptor: Not Supported 00:10:01.565 SGL Metadata Pointer: Not Supported 00:10:01.565 Oversized SGL: Not Supported 00:10:01.565 SGL Metadata Address: Not Supported 00:10:01.565 SGL Offset: Not Supported 00:10:01.565 Transport SGL Data Block: Not Supported 00:10:01.565 Replay Protected Memory Block: Not Supported 00:10:01.565 00:10:01.565 Firmware Slot Information 00:10:01.565 ========================= 00:10:01.565 Active slot: 1 00:10:01.565 Slot 1 Firmware Revision: 1.0 00:10:01.565 00:10:01.565 00:10:01.565 Commands Supported and Effects 00:10:01.565 ============================== 00:10:01.565 Admin Commands 00:10:01.565 -------------- 00:10:01.565 Delete I/O Submission Queue (00h): Supported 00:10:01.565 Create I/O Submission Queue (01h): Supported 00:10:01.565 Get Log Page (02h): Supported 00:10:01.565 Delete I/O Completion Queue (04h): Supported 00:10:01.565 Create I/O Completion Queue (05h): Supported 00:10:01.565 Identify (06h): Supported 00:10:01.565 Abort (08h): Supported 00:10:01.565 Set Features (09h): Supported 00:10:01.565 Get Features (0Ah): Supported 00:10:01.565 Asynchronous Event Request (0Ch): Supported 00:10:01.565 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:01.565 Directive Send (19h): Supported 00:10:01.565 Directive Receive (1Ah): Supported 00:10:01.565 Virtualization Management (1Ch): Supported 00:10:01.565 Doorbell Buffer Config (7Ch): Supported 00:10:01.565 Format NVM (80h): Supported LBA-Change 00:10:01.565 I/O Commands 00:10:01.565 ------------ 00:10:01.565 Flush (00h): Supported LBA-Change 00:10:01.565 Write (01h): Supported LBA-Change 00:10:01.565 Read (02h): Supported 00:10:01.565 Compare (05h): Supported 00:10:01.565 Write Zeroes (08h): Supported LBA-Change 00:10:01.565 Dataset Management (09h): Supported LBA-Change 00:10:01.565 Unknown (0Ch): Supported 00:10:01.565 Unknown (12h): Supported 00:10:01.565 Copy (19h): 
Supported LBA-Change 00:10:01.565 Unknown (1Dh): Supported LBA-Change 00:10:01.565 00:10:01.565 Error Log 00:10:01.565 ========= 00:10:01.565 00:10:01.565 Arbitration 00:10:01.565 =========== 00:10:01.565 Arbitration Burst: no limit 00:10:01.565 00:10:01.565 Power Management 00:10:01.565 ================ 00:10:01.565 Number of Power States: 1 00:10:01.565 Current Power State: Power State #0 00:10:01.565 Power State #0: 00:10:01.565 Max Power: 25.00 W 00:10:01.565 Non-Operational State: Operational 00:10:01.565 Entry Latency: 16 microseconds 00:10:01.565 Exit Latency: 4 microseconds 00:10:01.565 Relative Read Throughput: 0 00:10:01.565 Relative Read Latency: 0 00:10:01.565 Relative Write Throughput: 0 00:10:01.565 Relative Write Latency: 0 00:10:01.565 Idle Power: Not Reported 00:10:01.565 Active Power: Not Reported 00:10:01.565 Non-Operational Permissive Mode: Not Supported 00:10:01.565 00:10:01.565 Health Information 00:10:01.565 ================== 00:10:01.565 Critical Warnings: 00:10:01.565 Available Spare Space: OK 00:10:01.565 Temperature: OK 00:10:01.565 Device Reliability: OK 00:10:01.565 Read Only: No 00:10:01.565 Volatile Memory Backup: OK 00:10:01.565 Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.565 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:01.565 Available Spare: 0% 00:10:01.565 Available Spare Threshold: 0% 00:10:01.565 Life Percentage Used: 0% 00:10:01.565 Data Units Read: 1757 00:10:01.565 Data Units Written: 804 00:10:01.565 Host Read Commands: 86811 00:10:01.565 Host Write Commands: 42995 00:10:01.565 Controller Busy Time: 0 minutes 00:10:01.565 Power Cycles: 0 00:10:01.565 Power On Hours: 0 hours 00:10:01.565 Unsafe Shutdowns: 0 00:10:01.565 Unrecoverable Media Errors: 0 00:10:01.565 Lifetime Error Log Entries: 0 00:10:01.565 Warning Temperature Time: 0 minutes 00:10:01.565 Critical Temperature Time: 0 minutes 00:10:01.565 00:10:01.565 Number of Queues 00:10:01.565 ================ 00:10:01.565 Number of I/O Submission Queues: 64 00:10:01.565 Number of I/O Completion Queues: 64 00:10:01.565 00:10:01.565 ZNS Specific Controller Data 00:10:01.565 ============================ 00:10:01.565 Zone Append Size Limit: 0 00:10:01.565 00:10:01.565 00:10:01.565 Active Namespaces 00:10:01.565 ================= 00:10:01.565 Namespace ID:1 00:10:01.565 Error Recovery Timeout: Unlimited 00:10:01.565 Command Set Identifier: NVM (00h) 00:10:01.565 Deallocate: Supported 00:10:01.565 Deallocated/Unwritten Error: Supported 00:10:01.565 Deallocated Read Value: All 0x00 00:10:01.565 Deallocate in Write Zeroes: Not Supported 00:10:01.565 Deallocated Guard Field: 0xFFFF 00:10:01.565 Flush: Supported 00:10:01.565 Reservation: Not Supported 00:10:01.565 Metadata Transferred as: Separate Metadata Buffer 00:10:01.565 Namespace Sharing Capabilities: Private 00:10:01.565 Size (in LBAs): 1548666 (5GiB) 00:10:01.565 Capacity (in LBAs): 1548666 (5GiB) 00:10:01.565 Utilization (in LBAs): 1548666 (5GiB) 00:10:01.565 Thin Provisioning: Not Supported 00:10:01.565 Per-NS Atomic Units: No 00:10:01.565 Maximum Single Source Range Length: 128 00:10:01.565 Maximum Copy Length: 128 00:10:01.565 Maximum Source Range Count: 128 00:10:01.566 NGUID/EUI64 Never Reused: No 00:10:01.566 Namespace Write Protected: No 00:10:01.566 Number of LBA Formats: 8 00:10:01.566 Current LBA Format: LBA Format #07 00:10:01.566 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.566 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.566 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.566 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:10:01.566 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.566 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.566 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.566 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.566 00:10:01.566 09:47:55 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:01.566 09:47:55 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:10:01.825 ===================================================== 00:10:01.825 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:01.825 ===================================================== 00:10:01.825 Controller Capabilities/Features 00:10:01.825 ================================ 00:10:01.825 Vendor ID: 1b36 00:10:01.825 Subsystem Vendor ID: 1af4 00:10:01.825 Serial Number: 12341 00:10:01.825 Model Number: QEMU NVMe Ctrl 00:10:01.825 Firmware Version: 8.0.0 00:10:01.825 Recommended Arb Burst: 6 00:10:01.825 IEEE OUI Identifier: 00 54 52 00:10:01.825 Multi-path I/O 00:10:01.825 May have multiple subsystem ports: No 00:10:01.825 May have multiple controllers: No 00:10:01.825 Associated with SR-IOV VF: No 00:10:01.825 Max Data Transfer Size: 524288 00:10:01.825 Max Number of Namespaces: 256 00:10:01.825 Max Number of I/O Queues: 64 00:10:01.825 NVMe Specification Version (VS): 1.4 00:10:01.825 NVMe Specification Version (Identify): 1.4 00:10:01.825 Maximum Queue Entries: 2048 00:10:01.826 Contiguous Queues Required: Yes 00:10:01.826 Arbitration Mechanisms Supported 00:10:01.826 Weighted Round Robin: Not Supported 00:10:01.826 Vendor Specific: Not Supported 00:10:01.826 Reset Timeout: 7500 ms 00:10:01.826 Doorbell Stride: 4 bytes 00:10:01.826 NVM Subsystem Reset: Not Supported 00:10:01.826 Command Sets Supported 00:10:01.826 NVM Command Set: Supported 00:10:01.826 Boot Partition: Not Supported 00:10:01.826 Memory Page Size Minimum: 4096 bytes 00:10:01.826 Memory Page Size Maximum: 65536 bytes 00:10:01.826 Persistent Memory Region: Not Supported 00:10:01.826 Optional Asynchronous Events Supported 00:10:01.826 Namespace Attribute Notices: Supported 00:10:01.826 Firmware Activation Notices: Not Supported 00:10:01.826 ANA Change Notices: Not Supported 00:10:01.826 PLE Aggregate Log Change Notices: Not Supported 00:10:01.826 LBA Status Info Alert Notices: Not Supported 00:10:01.826 EGE Aggregate Log Change Notices: Not Supported 00:10:01.826 Normal NVM Subsystem Shutdown event: Not Supported 00:10:01.826 Zone Descriptor Change Notices: Not Supported 00:10:01.826 Discovery Log Change Notices: Not Supported 00:10:01.826 Controller Attributes 00:10:01.826 128-bit Host Identifier: Not Supported 00:10:01.826 Non-Operational Permissive Mode: Not Supported 00:10:01.826 NVM Sets: Not Supported 00:10:01.826 Read Recovery Levels: Not Supported 00:10:01.826 Endurance Groups: Not Supported 00:10:01.826 Predictable Latency Mode: Not Supported 00:10:01.826 Traffic Based Keep Alive: Not Supported 00:10:01.826 Namespace Granularity: Not Supported 00:10:01.826 SQ Associations: Not Supported 00:10:01.826 UUID List: Not Supported 00:10:01.826 Multi-Domain Subsystem: Not Supported 00:10:01.826 Fixed Capacity Management: Not Supported 00:10:01.826 Variable Capacity Management: Not Supported 00:10:01.826 Delete Endurance Group: Not Supported 00:10:01.826 Delete NVM Set: Not Supported 00:10:01.826 Extended LBA Formats Supported: Supported 00:10:01.826 Flexible Data Placement Supported: Not Supported 
00:10:01.826 00:10:01.826 Controller Memory Buffer Support 00:10:01.826 ================================ 00:10:01.826 Supported: No 00:10:01.826 00:10:01.826 Persistent Memory Region Support 00:10:01.826 ================================ 00:10:01.826 Supported: No 00:10:01.826 00:10:01.826 Admin Command Set Attributes 00:10:01.826 ============================ 00:10:01.826 Security Send/Receive: Not Supported 00:10:01.826 Format NVM: Supported 00:10:01.826 Firmware Activate/Download: Not Supported 00:10:01.826 Namespace Management: Supported 00:10:01.826 Device Self-Test: Not Supported 00:10:01.826 Directives: Supported 00:10:01.826 NVMe-MI: Not Supported 00:10:01.826 Virtualization Management: Not Supported 00:10:01.826 Doorbell Buffer Config: Supported 00:10:01.826 Get LBA Status Capability: Not Supported 00:10:01.826 Command & Feature Lockdown Capability: Not Supported 00:10:01.826 Abort Command Limit: 4 00:10:01.826 Async Event Request Limit: 4 00:10:01.826 Number of Firmware Slots: N/A 00:10:01.826 Firmware Slot 1 Read-Only: N/A 00:10:01.826 Firmware Activation Without Reset: N/A 00:10:01.826 Multiple Update Detection Support: N/A 00:10:01.826 Firmware Update Granularity: No Information Provided 00:10:01.826 Per-Namespace SMART Log: Yes 00:10:01.826 Asymmetric Namespace Access Log Page: Not Supported 00:10:01.826 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:01.826 Command Effects Log Page: Supported 00:10:01.826 Get Log Page Extended Data: Supported 00:10:01.826 Telemetry Log Pages: Not Supported 00:10:01.826 Persistent Event Log Pages: Not Supported 00:10:01.826 Supported Log Pages Log Page: May Support 00:10:01.826 Commands Supported & Effects Log Page: Not Supported 00:10:01.826 Feature Identifiers & Effects Log Page: May Support 00:10:01.826 NVMe-MI Commands & Effects Log Page: May Support 00:10:01.826 Data Area 4 for Telemetry Log: Not Supported 00:10:01.826 Error Log Page Entries Supported: 1 00:10:01.826 Keep Alive: Not Supported 00:10:01.826 00:10:01.826 NVM Command Set Attributes 00:10:01.826 ========================== 00:10:01.826 Submission Queue Entry Size 00:10:01.826 Max: 64 00:10:01.826 Min: 64 00:10:01.826 Completion Queue Entry Size 00:10:01.826 Max: 16 00:10:01.826 Min: 16 00:10:01.826 Number of Namespaces: 256 00:10:01.826 Compare Command: Supported 00:10:01.826 Write Uncorrectable Command: Not Supported 00:10:01.826 Dataset Management Command: Supported 00:10:01.826 Write Zeroes Command: Supported 00:10:01.826 Set Features Save Field: Supported 00:10:01.826 Reservations: Not Supported 00:10:01.826 Timestamp: Supported 00:10:01.826 Copy: Supported 00:10:01.826 Volatile Write Cache: Present 00:10:01.826 Atomic Write Unit (Normal): 1 00:10:01.826 Atomic Write Unit (PFail): 1 00:10:01.826 Atomic Compare & Write Unit: 1 00:10:01.826 Fused Compare & Write: Not Supported 00:10:01.826 Scatter-Gather List 00:10:01.826 SGL Command Set: Supported 00:10:01.826 SGL Keyed: Not Supported 00:10:01.826 SGL Bit Bucket Descriptor: Not Supported 00:10:01.826 SGL Metadata Pointer: Not Supported 00:10:01.826 Oversized SGL: Not Supported 00:10:01.826 SGL Metadata Address: Not Supported 00:10:01.826 SGL Offset: Not Supported 00:10:01.826 Transport SGL Data Block: Not Supported 00:10:01.826 Replay Protected Memory Block: Not Supported 00:10:01.826 00:10:01.826 Firmware Slot Information 00:10:01.826 ========================= 00:10:01.826 Active slot: 1 00:10:01.826 Slot 1 Firmware Revision: 1.0 00:10:01.826 00:10:01.826 00:10:01.826 Commands Supported and Effects 00:10:01.826 
============================== 00:10:01.826 Admin Commands 00:10:01.826 -------------- 00:10:01.826 Delete I/O Submission Queue (00h): Supported 00:10:01.826 Create I/O Submission Queue (01h): Supported 00:10:01.826 Get Log Page (02h): Supported 00:10:01.826 Delete I/O Completion Queue (04h): Supported 00:10:01.826 Create I/O Completion Queue (05h): Supported 00:10:01.826 Identify (06h): Supported 00:10:01.826 Abort (08h): Supported 00:10:01.826 Set Features (09h): Supported 00:10:01.826 Get Features (0Ah): Supported 00:10:01.826 Asynchronous Event Request (0Ch): Supported 00:10:01.826 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:01.826 Directive Send (19h): Supported 00:10:01.826 Directive Receive (1Ah): Supported 00:10:01.826 Virtualization Management (1Ch): Supported 00:10:01.826 Doorbell Buffer Config (7Ch): Supported 00:10:01.826 Format NVM (80h): Supported LBA-Change 00:10:01.826 I/O Commands 00:10:01.826 ------------ 00:10:01.826 Flush (00h): Supported LBA-Change 00:10:01.826 Write (01h): Supported LBA-Change 00:10:01.826 Read (02h): Supported 00:10:01.826 Compare (05h): Supported 00:10:01.826 Write Zeroes (08h): Supported LBA-Change 00:10:01.826 Dataset Management (09h): Supported LBA-Change 00:10:01.826 Unknown (0Ch): Supported 00:10:01.826 Unknown (12h): Supported 00:10:01.826 Copy (19h): Supported LBA-Change 00:10:01.826 Unknown (1Dh): Supported LBA-Change 00:10:01.826 00:10:01.826 Error Log 00:10:01.826 ========= 00:10:01.826 00:10:01.826 Arbitration 00:10:01.826 =========== 00:10:01.826 Arbitration Burst: no limit 00:10:01.826 00:10:01.826 Power Management 00:10:01.826 ================ 00:10:01.826 Number of Power States: 1 00:10:01.826 Current Power State: Power State #0 00:10:01.826 Power State #0: 00:10:01.826 Max Power: 25.00 W 00:10:01.826 Non-Operational State: Operational 00:10:01.826 Entry Latency: 16 microseconds 00:10:01.826 Exit Latency: 4 microseconds 00:10:01.826 Relative Read Throughput: 0 00:10:01.826 Relative Read Latency: 0 00:10:01.826 Relative Write Throughput: 0 00:10:01.826 Relative Write Latency: 0 00:10:01.826 Idle Power: Not Reported 00:10:01.826 Active Power: Not Reported 00:10:01.826 Non-Operational Permissive Mode: Not Supported 00:10:01.826 00:10:01.826 Health Information 00:10:01.826 ================== 00:10:01.826 Critical Warnings: 00:10:01.826 Available Spare Space: OK 00:10:01.826 Temperature: OK 00:10:01.826 Device Reliability: OK 00:10:01.826 Read Only: No 00:10:01.826 Volatile Memory Backup: OK 00:10:01.826 Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.826 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:01.826 Available Spare: 0% 00:10:01.826 Available Spare Threshold: 0% 00:10:01.826 Life Percentage Used: 0% 00:10:01.826 Data Units Read: 1198 00:10:01.826 Data Units Written: 556 00:10:01.826 Host Read Commands: 59719 00:10:01.826 Host Write Commands: 29381 00:10:01.826 Controller Busy Time: 0 minutes 00:10:01.826 Power Cycles: 0 00:10:01.826 Power On Hours: 0 hours 00:10:01.826 Unsafe Shutdowns: 0 00:10:01.826 Unrecoverable Media Errors: 0 00:10:01.827 Lifetime Error Log Entries: 0 00:10:01.827 Warning Temperature Time: 0 minutes 00:10:01.827 Critical Temperature Time: 0 minutes 00:10:01.827 00:10:01.827 Number of Queues 00:10:01.827 ================ 00:10:01.827 Number of I/O Submission Queues: 64 00:10:01.827 Number of I/O Completion Queues: 64 00:10:01.827 00:10:01.827 ZNS Specific Controller Data 00:10:01.827 ============================ 00:10:01.827 Zone Append Size Limit: 0 00:10:01.827 00:10:01.827 
00:10:01.827 Active Namespaces 00:10:01.827 ================= 00:10:01.827 Namespace ID:1 00:10:01.827 Error Recovery Timeout: Unlimited 00:10:01.827 Command Set Identifier: NVM (00h) 00:10:01.827 Deallocate: Supported 00:10:01.827 Deallocated/Unwritten Error: Supported 00:10:01.827 Deallocated Read Value: All 0x00 00:10:01.827 Deallocate in Write Zeroes: Not Supported 00:10:01.827 Deallocated Guard Field: 0xFFFF 00:10:01.827 Flush: Supported 00:10:01.827 Reservation: Not Supported 00:10:01.827 Namespace Sharing Capabilities: Private 00:10:01.827 Size (in LBAs): 1310720 (5GiB) 00:10:01.827 Capacity (in LBAs): 1310720 (5GiB) 00:10:01.827 Utilization (in LBAs): 1310720 (5GiB) 00:10:01.827 Thin Provisioning: Not Supported 00:10:01.827 Per-NS Atomic Units: No 00:10:01.827 Maximum Single Source Range Length: 128 00:10:01.827 Maximum Copy Length: 128 00:10:01.827 Maximum Source Range Count: 128 00:10:01.827 NGUID/EUI64 Never Reused: No 00:10:01.827 Namespace Write Protected: No 00:10:01.827 Number of LBA Formats: 8 00:10:01.827 Current LBA Format: LBA Format #04 00:10:01.827 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.827 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.827 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.827 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.827 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.827 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.827 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.827 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.827 00:10:01.827 09:47:55 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:01.827 09:47:55 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:10:02.086 ===================================================== 00:10:02.087 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:02.087 ===================================================== 00:10:02.087 Controller Capabilities/Features 00:10:02.087 ================================ 00:10:02.087 Vendor ID: 1b36 00:10:02.087 Subsystem Vendor ID: 1af4 00:10:02.087 Serial Number: 12342 00:10:02.087 Model Number: QEMU NVMe Ctrl 00:10:02.087 Firmware Version: 8.0.0 00:10:02.087 Recommended Arb Burst: 6 00:10:02.087 IEEE OUI Identifier: 00 54 52 00:10:02.087 Multi-path I/O 00:10:02.087 May have multiple subsystem ports: No 00:10:02.087 May have multiple controllers: No 00:10:02.087 Associated with SR-IOV VF: No 00:10:02.087 Max Data Transfer Size: 524288 00:10:02.087 Max Number of Namespaces: 256 00:10:02.087 Max Number of I/O Queues: 64 00:10:02.087 NVMe Specification Version (VS): 1.4 00:10:02.087 NVMe Specification Version (Identify): 1.4 00:10:02.087 Maximum Queue Entries: 2048 00:10:02.087 Contiguous Queues Required: Yes 00:10:02.087 Arbitration Mechanisms Supported 00:10:02.087 Weighted Round Robin: Not Supported 00:10:02.087 Vendor Specific: Not Supported 00:10:02.087 Reset Timeout: 7500 ms 00:10:02.087 Doorbell Stride: 4 bytes 00:10:02.087 NVM Subsystem Reset: Not Supported 00:10:02.087 Command Sets Supported 00:10:02.087 NVM Command Set: Supported 00:10:02.087 Boot Partition: Not Supported 00:10:02.087 Memory Page Size Minimum: 4096 bytes 00:10:02.087 Memory Page Size Maximum: 65536 bytes 00:10:02.087 Persistent Memory Region: Not Supported 00:10:02.087 Optional Asynchronous Events Supported 00:10:02.087 Namespace Attribute Notices: Supported 00:10:02.087 Firmware Activation Notices: Not Supported 00:10:02.087 ANA Change 
Notices: Not Supported 00:10:02.087 PLE Aggregate Log Change Notices: Not Supported 00:10:02.087 LBA Status Info Alert Notices: Not Supported 00:10:02.087 EGE Aggregate Log Change Notices: Not Supported 00:10:02.087 Normal NVM Subsystem Shutdown event: Not Supported 00:10:02.087 Zone Descriptor Change Notices: Not Supported 00:10:02.087 Discovery Log Change Notices: Not Supported 00:10:02.087 Controller Attributes 00:10:02.087 128-bit Host Identifier: Not Supported 00:10:02.087 Non-Operational Permissive Mode: Not Supported 00:10:02.087 NVM Sets: Not Supported 00:10:02.087 Read Recovery Levels: Not Supported 00:10:02.087 Endurance Groups: Not Supported 00:10:02.087 Predictable Latency Mode: Not Supported 00:10:02.087 Traffic Based Keep Alive: Not Supported 00:10:02.087 Namespace Granularity: Not Supported 00:10:02.087 SQ Associations: Not Supported 00:10:02.087 UUID List: Not Supported 00:10:02.087 Multi-Domain Subsystem: Not Supported 00:10:02.087 Fixed Capacity Management: Not Supported 00:10:02.087 Variable Capacity Management: Not Supported 00:10:02.087 Delete Endurance Group: Not Supported 00:10:02.087 Delete NVM Set: Not Supported 00:10:02.087 Extended LBA Formats Supported: Supported 00:10:02.087 Flexible Data Placement Supported: Not Supported 00:10:02.087 00:10:02.087 Controller Memory Buffer Support 00:10:02.087 ================================ 00:10:02.087 Supported: No 00:10:02.087 00:10:02.087 Persistent Memory Region Support 00:10:02.087 ================================ 00:10:02.087 Supported: No 00:10:02.087 00:10:02.087 Admin Command Set Attributes 00:10:02.087 ============================ 00:10:02.087 Security Send/Receive: Not Supported 00:10:02.087 Format NVM: Supported 00:10:02.087 Firmware Activate/Download: Not Supported 00:10:02.087 Namespace Management: Supported 00:10:02.087 Device Self-Test: Not Supported 00:10:02.087 Directives: Supported 00:10:02.087 NVMe-MI: Not Supported 00:10:02.087 Virtualization Management: Not Supported 00:10:02.087 Doorbell Buffer Config: Supported 00:10:02.087 Get LBA Status Capability: Not Supported 00:10:02.087 Command & Feature Lockdown Capability: Not Supported 00:10:02.087 Abort Command Limit: 4 00:10:02.087 Async Event Request Limit: 4 00:10:02.087 Number of Firmware Slots: N/A 00:10:02.087 Firmware Slot 1 Read-Only: N/A 00:10:02.087 Firmware Activation Without Reset: N/A 00:10:02.087 Multiple Update Detection Support: N/A 00:10:02.087 Firmware Update Granularity: No Information Provided 00:10:02.087 Per-Namespace SMART Log: Yes 00:10:02.087 Asymmetric Namespace Access Log Page: Not Supported 00:10:02.087 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:02.087 Command Effects Log Page: Supported 00:10:02.087 Get Log Page Extended Data: Supported 00:10:02.087 Telemetry Log Pages: Not Supported 00:10:02.087 Persistent Event Log Pages: Not Supported 00:10:02.087 Supported Log Pages Log Page: May Support 00:10:02.087 Commands Supported & Effects Log Page: Not Supported 00:10:02.087 Feature Identifiers & Effects Log Page: May Support 00:10:02.087 NVMe-MI Commands & Effects Log Page: May Support 00:10:02.087 Data Area 4 for Telemetry Log: Not Supported 00:10:02.087 Error Log Page Entries Supported: 1 00:10:02.087 Keep Alive: Not Supported 00:10:02.087 00:10:02.087 NVM Command Set Attributes 00:10:02.087 ========================== 00:10:02.087 Submission Queue Entry Size 00:10:02.087 Max: 64 00:10:02.087 Min: 64 00:10:02.087 Completion Queue Entry Size 00:10:02.087 Max: 16 00:10:02.087 Min: 16 00:10:02.087 Number of Namespaces: 256 
00:10:02.087 Compare Command: Supported 00:10:02.087 Write Uncorrectable Command: Not Supported 00:10:02.087 Dataset Management Command: Supported 00:10:02.087 Write Zeroes Command: Supported 00:10:02.087 Set Features Save Field: Supported 00:10:02.087 Reservations: Not Supported 00:10:02.087 Timestamp: Supported 00:10:02.087 Copy: Supported 00:10:02.087 Volatile Write Cache: Present 00:10:02.087 Atomic Write Unit (Normal): 1 00:10:02.087 Atomic Write Unit (PFail): 1 00:10:02.087 Atomic Compare & Write Unit: 1 00:10:02.087 Fused Compare & Write: Not Supported 00:10:02.087 Scatter-Gather List 00:10:02.087 SGL Command Set: Supported 00:10:02.087 SGL Keyed: Not Supported 00:10:02.087 SGL Bit Bucket Descriptor: Not Supported 00:10:02.087 SGL Metadata Pointer: Not Supported 00:10:02.087 Oversized SGL: Not Supported 00:10:02.087 SGL Metadata Address: Not Supported 00:10:02.087 SGL Offset: Not Supported 00:10:02.087 Transport SGL Data Block: Not Supported 00:10:02.087 Replay Protected Memory Block: Not Supported 00:10:02.087 00:10:02.087 Firmware Slot Information 00:10:02.087 ========================= 00:10:02.087 Active slot: 1 00:10:02.087 Slot 1 Firmware Revision: 1.0 00:10:02.087 00:10:02.087 00:10:02.087 Commands Supported and Effects 00:10:02.087 ============================== 00:10:02.087 Admin Commands 00:10:02.087 -------------- 00:10:02.087 Delete I/O Submission Queue (00h): Supported 00:10:02.087 Create I/O Submission Queue (01h): Supported 00:10:02.087 Get Log Page (02h): Supported 00:10:02.087 Delete I/O Completion Queue (04h): Supported 00:10:02.087 Create I/O Completion Queue (05h): Supported 00:10:02.087 Identify (06h): Supported 00:10:02.087 Abort (08h): Supported 00:10:02.087 Set Features (09h): Supported 00:10:02.087 Get Features (0Ah): Supported 00:10:02.087 Asynchronous Event Request (0Ch): Supported 00:10:02.087 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:02.087 Directive Send (19h): Supported 00:10:02.087 Directive Receive (1Ah): Supported 00:10:02.087 Virtualization Management (1Ch): Supported 00:10:02.087 Doorbell Buffer Config (7Ch): Supported 00:10:02.087 Format NVM (80h): Supported LBA-Change 00:10:02.087 I/O Commands 00:10:02.087 ------------ 00:10:02.087 Flush (00h): Supported LBA-Change 00:10:02.087 Write (01h): Supported LBA-Change 00:10:02.087 Read (02h): Supported 00:10:02.087 Compare (05h): Supported 00:10:02.088 Write Zeroes (08h): Supported LBA-Change 00:10:02.088 Dataset Management (09h): Supported LBA-Change 00:10:02.088 Unknown (0Ch): Supported 00:10:02.088 Unknown (12h): Supported 00:10:02.088 Copy (19h): Supported LBA-Change 00:10:02.088 Unknown (1Dh): Supported LBA-Change 00:10:02.088 00:10:02.088 Error Log 00:10:02.088 ========= 00:10:02.088 00:10:02.088 Arbitration 00:10:02.088 =========== 00:10:02.088 Arbitration Burst: no limit 00:10:02.088 00:10:02.088 Power Management 00:10:02.088 ================ 00:10:02.088 Number of Power States: 1 00:10:02.088 Current Power State: Power State #0 00:10:02.088 Power State #0: 00:10:02.088 Max Power: 25.00 W 00:10:02.088 Non-Operational State: Operational 00:10:02.088 Entry Latency: 16 microseconds 00:10:02.088 Exit Latency: 4 microseconds 00:10:02.088 Relative Read Throughput: 0 00:10:02.088 Relative Read Latency: 0 00:10:02.088 Relative Write Throughput: 0 00:10:02.088 Relative Write Latency: 0 00:10:02.088 Idle Power: Not Reported 00:10:02.088 Active Power: Not Reported 00:10:02.088 Non-Operational Permissive Mode: Not Supported 00:10:02.088 00:10:02.088 Health Information 00:10:02.088 
================== 00:10:02.088 Critical Warnings: 00:10:02.088 Available Spare Space: OK 00:10:02.088 Temperature: OK 00:10:02.088 Device Reliability: OK 00:10:02.088 Read Only: No 00:10:02.088 Volatile Memory Backup: OK 00:10:02.088 Current Temperature: 323 Kelvin (50 Celsius) 00:10:02.088 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:02.088 Available Spare: 0% 00:10:02.088 Available Spare Threshold: 0% 00:10:02.088 Life Percentage Used: 0% 00:10:02.088 Data Units Read: 3683 00:10:02.088 Data Units Written: 1703 00:10:02.088 Host Read Commands: 180316 00:10:02.088 Host Write Commands: 88612 00:10:02.088 Controller Busy Time: 0 minutes 00:10:02.088 Power Cycles: 0 00:10:02.088 Power On Hours: 0 hours 00:10:02.088 Unsafe Shutdowns: 0 00:10:02.088 Unrecoverable Media Errors: 0 00:10:02.088 Lifetime Error Log Entries: 0 00:10:02.088 Warning Temperature Time: 0 minutes 00:10:02.088 Critical Temperature Time: 0 minutes 00:10:02.088 00:10:02.088 Number of Queues 00:10:02.088 ================ 00:10:02.088 Number of I/O Submission Queues: 64 00:10:02.088 Number of I/O Completion Queues: 64 00:10:02.088 00:10:02.088 ZNS Specific Controller Data 00:10:02.088 ============================ 00:10:02.088 Zone Append Size Limit: 0 00:10:02.088 00:10:02.088 00:10:02.088 Active Namespaces 00:10:02.088 ================= 00:10:02.088 Namespace ID:1 00:10:02.088 Error Recovery Timeout: Unlimited 00:10:02.088 Command Set Identifier: NVM (00h) 00:10:02.088 Deallocate: Supported 00:10:02.088 Deallocated/Unwritten Error: Supported 00:10:02.088 Deallocated Read Value: All 0x00 00:10:02.088 Deallocate in Write Zeroes: Not Supported 00:10:02.088 Deallocated Guard Field: 0xFFFF 00:10:02.088 Flush: Supported 00:10:02.088 Reservation: Not Supported 00:10:02.088 Namespace Sharing Capabilities: Private 00:10:02.088 Size (in LBAs): 1048576 (4GiB) 00:10:02.088 Capacity (in LBAs): 1048576 (4GiB) 00:10:02.088 Utilization (in LBAs): 1048576 (4GiB) 00:10:02.088 Thin Provisioning: Not Supported 00:10:02.088 Per-NS Atomic Units: No 00:10:02.088 Maximum Single Source Range Length: 128 00:10:02.088 Maximum Copy Length: 128 00:10:02.088 Maximum Source Range Count: 128 00:10:02.088 NGUID/EUI64 Never Reused: No 00:10:02.088 Namespace Write Protected: No 00:10:02.088 Number of LBA Formats: 8 00:10:02.088 Current LBA Format: LBA Format #04 00:10:02.088 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:02.088 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:02.088 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:02.088 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:02.088 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:02.088 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:02.088 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:02.088 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:02.088 00:10:02.088 Namespace ID:2 00:10:02.088 Error Recovery Timeout: Unlimited 00:10:02.088 Command Set Identifier: NVM (00h) 00:10:02.088 Deallocate: Supported 00:10:02.088 Deallocated/Unwritten Error: Supported 00:10:02.088 Deallocated Read Value: All 0x00 00:10:02.088 Deallocate in Write Zeroes: Not Supported 00:10:02.088 Deallocated Guard Field: 0xFFFF 00:10:02.088 Flush: Supported 00:10:02.088 Reservation: Not Supported 00:10:02.088 Namespace Sharing Capabilities: Private 00:10:02.088 Size (in LBAs): 1048576 (4GiB) 00:10:02.088 Capacity (in LBAs): 1048576 (4GiB) 00:10:02.088 Utilization (in LBAs): 1048576 (4GiB) 00:10:02.088 Thin Provisioning: Not Supported 00:10:02.088 Per-NS Atomic Units: No 
00:10:02.088 Maximum Single Source Range Length: 128 00:10:02.088 Maximum Copy Length: 128 00:10:02.088 Maximum Source Range Count: 128 00:10:02.088 NGUID/EUI64 Never Reused: No 00:10:02.088 Namespace Write Protected: No 00:10:02.088 Number of LBA Formats: 8 00:10:02.088 Current LBA Format: LBA Format #04 00:10:02.088 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:02.088 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:02.088 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:02.088 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:02.088 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:02.088 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:02.088 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:02.088 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:02.088 00:10:02.088 Namespace ID:3 00:10:02.088 Error Recovery Timeout: Unlimited 00:10:02.088 Command Set Identifier: NVM (00h) 00:10:02.088 Deallocate: Supported 00:10:02.088 Deallocated/Unwritten Error: Supported 00:10:02.088 Deallocated Read Value: All 0x00 00:10:02.088 Deallocate in Write Zeroes: Not Supported 00:10:02.088 Deallocated Guard Field: 0xFFFF 00:10:02.088 Flush: Supported 00:10:02.088 Reservation: Not Supported 00:10:02.088 Namespace Sharing Capabilities: Private 00:10:02.088 Size (in LBAs): 1048576 (4GiB) 00:10:02.088 Capacity (in LBAs): 1048576 (4GiB) 00:10:02.088 Utilization (in LBAs): 1048576 (4GiB) 00:10:02.088 Thin Provisioning: Not Supported 00:10:02.088 Per-NS Atomic Units: No 00:10:02.088 Maximum Single Source Range Length: 128 00:10:02.088 Maximum Copy Length: 128 00:10:02.088 Maximum Source Range Count: 128 00:10:02.088 NGUID/EUI64 Never Reused: No 00:10:02.088 Namespace Write Protected: No 00:10:02.088 Number of LBA Formats: 8 00:10:02.088 Current LBA Format: LBA Format #04 00:10:02.088 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:02.088 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:02.088 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:02.088 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:02.088 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:02.088 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:02.088 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:02.088 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:02.088 00:10:02.348 09:47:55 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:02.348 09:47:55 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:10:02.348 ===================================================== 00:10:02.348 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:02.348 ===================================================== 00:10:02.348 Controller Capabilities/Features 00:10:02.348 ================================ 00:10:02.348 Vendor ID: 1b36 00:10:02.348 Subsystem Vendor ID: 1af4 00:10:02.348 Serial Number: 12343 00:10:02.348 Model Number: QEMU NVMe Ctrl 00:10:02.348 Firmware Version: 8.0.0 00:10:02.348 Recommended Arb Burst: 6 00:10:02.348 IEEE OUI Identifier: 00 54 52 00:10:02.348 Multi-path I/O 00:10:02.348 May have multiple subsystem ports: No 00:10:02.348 May have multiple controllers: Yes 00:10:02.348 Associated with SR-IOV VF: No 00:10:02.348 Max Data Transfer Size: 524288 00:10:02.348 Max Number of Namespaces: 256 00:10:02.348 Max Number of I/O Queues: 64 00:10:02.348 NVMe Specification Version (VS): 1.4 00:10:02.348 NVMe Specification Version (Identify): 1.4 00:10:02.348 Maximum Queue Entries: 2048 
00:10:02.348 Contiguous Queues Required: Yes 00:10:02.348 Arbitration Mechanisms Supported 00:10:02.348 Weighted Round Robin: Not Supported 00:10:02.348 Vendor Specific: Not Supported 00:10:02.348 Reset Timeout: 7500 ms 00:10:02.348 Doorbell Stride: 4 bytes 00:10:02.348 NVM Subsystem Reset: Not Supported 00:10:02.348 Command Sets Supported 00:10:02.348 NVM Command Set: Supported 00:10:02.348 Boot Partition: Not Supported 00:10:02.348 Memory Page Size Minimum: 4096 bytes 00:10:02.348 Memory Page Size Maximum: 65536 bytes 00:10:02.348 Persistent Memory Region: Not Supported 00:10:02.348 Optional Asynchronous Events Supported 00:10:02.348 Namespace Attribute Notices: Supported 00:10:02.348 Firmware Activation Notices: Not Supported 00:10:02.348 ANA Change Notices: Not Supported 00:10:02.348 PLE Aggregate Log Change Notices: Not Supported 00:10:02.348 LBA Status Info Alert Notices: Not Supported 00:10:02.348 EGE Aggregate Log Change Notices: Not Supported 00:10:02.348 Normal NVM Subsystem Shutdown event: Not Supported 00:10:02.348 Zone Descriptor Change Notices: Not Supported 00:10:02.348 Discovery Log Change Notices: Not Supported 00:10:02.348 Controller Attributes 00:10:02.348 128-bit Host Identifier: Not Supported 00:10:02.348 Non-Operational Permissive Mode: Not Supported 00:10:02.348 NVM Sets: Not Supported 00:10:02.348 Read Recovery Levels: Not Supported 00:10:02.348 Endurance Groups: Supported 00:10:02.348 Predictable Latency Mode: Not Supported 00:10:02.348 Traffic Based Keep Alive: Not Supported 00:10:02.348 Namespace Granularity: Not Supported 00:10:02.348 SQ Associations: Not Supported 00:10:02.349 UUID List: Not Supported 00:10:02.349 Multi-Domain Subsystem: Not Supported 00:10:02.349 Fixed Capacity Management: Not Supported 00:10:02.349 Variable Capacity Management: Not Supported 00:10:02.349 Delete Endurance Group: Not Supported 00:10:02.349 Delete NVM Set: Not Supported 00:10:02.349 Extended LBA Formats Supported: Supported 00:10:02.349 Flexible Data Placement Supported: Supported 00:10:02.349 00:10:02.349 Controller Memory Buffer Support 00:10:02.349 ================================ 00:10:02.349 Supported: No 00:10:02.349 00:10:02.349 Persistent Memory Region Support 00:10:02.349 ================================ 00:10:02.349 Supported: No 00:10:02.349 00:10:02.349 Admin Command Set Attributes 00:10:02.349 ============================ 00:10:02.349 Security Send/Receive: Not Supported 00:10:02.349 Format NVM: Supported 00:10:02.349 Firmware Activate/Download: Not Supported 00:10:02.349 Namespace Management: Supported 00:10:02.349 Device Self-Test: Not Supported 00:10:02.349 Directives: Supported 00:10:02.349 NVMe-MI: Not Supported 00:10:02.349 Virtualization Management: Not Supported 00:10:02.349 Doorbell Buffer Config: Supported 00:10:02.349 Get LBA Status Capability: Not Supported 00:10:02.349 Command & Feature Lockdown Capability: Not Supported 00:10:02.349 Abort Command Limit: 4 00:10:02.349 Async Event Request Limit: 4 00:10:02.349 Number of Firmware Slots: N/A 00:10:02.349 Firmware Slot 1 Read-Only: N/A 00:10:02.349 Firmware Activation Without Reset: N/A 00:10:02.349 Multiple Update Detection Support: N/A 00:10:02.349 Firmware Update Granularity: No Information Provided 00:10:02.349 Per-Namespace SMART Log: Yes 00:10:02.349 Asymmetric Namespace Access Log Page: Not Supported 00:10:02.349 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:02.349 Command Effects Log Page: Supported 00:10:02.349 Get Log Page Extended Data: Supported 00:10:02.349 Telemetry Log Pages: Not 
Supported 00:10:02.349 Persistent Event Log Pages: Not Supported 00:10:02.349 Supported Log Pages Log Page: May Support 00:10:02.349 Commands Supported & Effects Log Page: Not Supported 00:10:02.349 Feature Identifiers & Effects Log Page: May Support 00:10:02.349 NVMe-MI Commands & Effects Log Page: May Support 00:10:02.349 Data Area 4 for Telemetry Log: Not Supported 00:10:02.349 Error Log Page Entries Supported: 1 00:10:02.349 Keep Alive: Not Supported 00:10:02.349 00:10:02.349 NVM Command Set Attributes 00:10:02.349 ========================== 00:10:02.349 Submission Queue Entry Size 00:10:02.349 Max: 64 00:10:02.349 Min: 64 00:10:02.349 Completion Queue Entry Size 00:10:02.349 Max: 16 00:10:02.349 Min: 16 00:10:02.349 Number of Namespaces: 256 00:10:02.349 Compare Command: Supported 00:10:02.349 Write Uncorrectable Command: Not Supported 00:10:02.349 Dataset Management Command: Supported 00:10:02.349 Write Zeroes Command: Supported 00:10:02.349 Set Features Save Field: Supported 00:10:02.349 Reservations: Not Supported 00:10:02.349 Timestamp: Supported 00:10:02.349 Copy: Supported 00:10:02.349 Volatile Write Cache: Present 00:10:02.349 Atomic Write Unit (Normal): 1 00:10:02.349 Atomic Write Unit (PFail): 1 00:10:02.349 Atomic Compare & Write Unit: 1 00:10:02.349 Fused Compare & Write: Not Supported 00:10:02.349 Scatter-Gather List 00:10:02.349 SGL Command Set: Supported 00:10:02.349 SGL Keyed: Not Supported 00:10:02.349 SGL Bit Bucket Descriptor: Not Supported 00:10:02.349 SGL Metadata Pointer: Not Supported 00:10:02.349 Oversized SGL: Not Supported 00:10:02.349 SGL Metadata Address: Not Supported 00:10:02.349 SGL Offset: Not Supported 00:10:02.349 Transport SGL Data Block: Not Supported 00:10:02.349 Replay Protected Memory Block: Not Supported 00:10:02.349 00:10:02.349 Firmware Slot Information 00:10:02.349 ========================= 00:10:02.349 Active slot: 1 00:10:02.349 Slot 1 Firmware Revision: 1.0 00:10:02.349 00:10:02.349 00:10:02.349 Commands Supported and Effects 00:10:02.349 ============================== 00:10:02.349 Admin Commands 00:10:02.349 -------------- 00:10:02.349 Delete I/O Submission Queue (00h): Supported 00:10:02.349 Create I/O Submission Queue (01h): Supported 00:10:02.349 Get Log Page (02h): Supported 00:10:02.349 Delete I/O Completion Queue (04h): Supported 00:10:02.349 Create I/O Completion Queue (05h): Supported 00:10:02.349 Identify (06h): Supported 00:10:02.349 Abort (08h): Supported 00:10:02.349 Set Features (09h): Supported 00:10:02.349 Get Features (0Ah): Supported 00:10:02.349 Asynchronous Event Request (0Ch): Supported 00:10:02.349 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:02.349 Directive Send (19h): Supported 00:10:02.349 Directive Receive (1Ah): Supported 00:10:02.349 Virtualization Management (1Ch): Supported 00:10:02.349 Doorbell Buffer Config (7Ch): Supported 00:10:02.349 Format NVM (80h): Supported LBA-Change 00:10:02.349 I/O Commands 00:10:02.349 ------------ 00:10:02.349 Flush (00h): Supported LBA-Change 00:10:02.349 Write (01h): Supported LBA-Change 00:10:02.349 Read (02h): Supported 00:10:02.349 Compare (05h): Supported 00:10:02.349 Write Zeroes (08h): Supported LBA-Change 00:10:02.349 Dataset Management (09h): Supported LBA-Change 00:10:02.349 Unknown (0Ch): Supported 00:10:02.349 Unknown (12h): Supported 00:10:02.349 Copy (19h): Supported LBA-Change 00:10:02.349 Unknown (1Dh): Supported LBA-Change 00:10:02.349 00:10:02.349 Error Log 00:10:02.349 ========= 00:10:02.349 00:10:02.349 Arbitration 00:10:02.349 =========== 
00:10:02.349 Arbitration Burst: no limit 00:10:02.349 00:10:02.349 Power Management 00:10:02.349 ================ 00:10:02.349 Number of Power States: 1 00:10:02.349 Current Power State: Power State #0 00:10:02.349 Power State #0: 00:10:02.349 Max Power: 25.00 W 00:10:02.349 Non-Operational State: Operational 00:10:02.349 Entry Latency: 16 microseconds 00:10:02.349 Exit Latency: 4 microseconds 00:10:02.349 Relative Read Throughput: 0 00:10:02.349 Relative Read Latency: 0 00:10:02.349 Relative Write Throughput: 0 00:10:02.349 Relative Write Latency: 0 00:10:02.349 Idle Power: Not Reported 00:10:02.349 Active Power: Not Reported 00:10:02.349 Non-Operational Permissive Mode: Not Supported 00:10:02.349 00:10:02.349 Health Information 00:10:02.349 ================== 00:10:02.349 Critical Warnings: 00:10:02.349 Available Spare Space: OK 00:10:02.349 Temperature: OK 00:10:02.349 Device Reliability: OK 00:10:02.349 Read Only: No 00:10:02.349 Volatile Memory Backup: OK 00:10:02.349 Current Temperature: 323 Kelvin (50 Celsius) 00:10:02.349 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:02.349 Available Spare: 0% 00:10:02.349 Available Spare Threshold: 0% 00:10:02.349 Life Percentage Used: 0% 00:10:02.349 Data Units Read: 1249 00:10:02.349 Data Units Written: 594 00:10:02.349 Host Read Commands: 59718 00:10:02.349 Host Write Commands: 29763 00:10:02.349 Controller Busy Time: 0 minutes 00:10:02.349 Power Cycles: 0 00:10:02.349 Power On Hours: 0 hours 00:10:02.349 Unsafe Shutdowns: 0 00:10:02.349 Unrecoverable Media Errors: 0 00:10:02.349 Lifetime Error Log Entries: 0 00:10:02.349 Warning Temperature Time: 0 minutes 00:10:02.349 Critical Temperature Time: 0 minutes 00:10:02.349 00:10:02.349 Number of Queues 00:10:02.349 ================ 00:10:02.349 Number of I/O Submission Queues: 64 00:10:02.349 Number of I/O Completion Queues: 64 00:10:02.349 00:10:02.349 ZNS Specific Controller Data 00:10:02.349 ============================ 00:10:02.349 Zone Append Size Limit: 0 00:10:02.349 00:10:02.349 00:10:02.349 Active Namespaces 00:10:02.349 ================= 00:10:02.349 Namespace ID:1 00:10:02.349 Error Recovery Timeout: Unlimited 00:10:02.349 Command Set Identifier: NVM (00h) 00:10:02.349 Deallocate: Supported 00:10:02.349 Deallocated/Unwritten Error: Supported 00:10:02.349 Deallocated Read Value: All 0x00 00:10:02.349 Deallocate in Write Zeroes: Not Supported 00:10:02.349 Deallocated Guard Field: 0xFFFF 00:10:02.349 Flush: Supported 00:10:02.349 Reservation: Not Supported 00:10:02.349 Namespace Sharing Capabilities: Multiple Controllers 00:10:02.349 Size (in LBAs): 262144 (1GiB) 00:10:02.349 Capacity (in LBAs): 262144 (1GiB) 00:10:02.349 Utilization (in LBAs): 262144 (1GiB) 00:10:02.349 Thin Provisioning: Not Supported 00:10:02.349 Per-NS Atomic Units: No 00:10:02.349 Maximum Single Source Range Length: 128 00:10:02.349 Maximum Copy Length: 128 00:10:02.349 Maximum Source Range Count: 128 00:10:02.349 NGUID/EUI64 Never Reused: No 00:10:02.349 Namespace Write Protected: No 00:10:02.349 Endurance group ID: 1 00:10:02.350 Number of LBA Formats: 8 00:10:02.350 Current LBA Format: LBA Format #04 00:10:02.350 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:02.350 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:02.350 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:02.350 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:02.350 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:02.350 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:02.350 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:10:02.350 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:02.350 00:10:02.350 Get Feature FDP: 00:10:02.350 ================ 00:10:02.350 Enabled: Yes 00:10:02.350 FDP configuration index: 0 00:10:02.350 00:10:02.350 FDP configurations log page 00:10:02.350 =========================== 00:10:02.350 Number of FDP configurations: 1 00:10:02.350 Version: 0 00:10:02.350 Size: 112 00:10:02.350 FDP Configuration Descriptor: 0 00:10:02.350 Descriptor Size: 96 00:10:02.350 Reclaim Group Identifier format: 2 00:10:02.350 FDP Volatile Write Cache: Not Present 00:10:02.350 FDP Configuration: Valid 00:10:02.350 Vendor Specific Size: 0 00:10:02.350 Number of Reclaim Groups: 2 00:10:02.350 Number of Reclaim Unit Handles: 8 00:10:02.350 Max Placement Identifiers: 128 00:10:02.350 Number of Namespaces Supported: 256 00:10:02.350 Reclaim unit Nominal Size: 6000000 bytes 00:10:02.350 Estimated Reclaim Unit Time Limit: Not Reported 00:10:02.350 RUH Desc #000: RUH Type: Initially Isolated 00:10:02.350 RUH Desc #001: RUH Type: Initially Isolated 00:10:02.350 RUH Desc #002: RUH Type: Initially Isolated 00:10:02.350 RUH Desc #003: RUH Type: Initially Isolated 00:10:02.350 RUH Desc #004: RUH Type: Initially Isolated 00:10:02.350 RUH Desc #005: RUH Type: Initially Isolated 00:10:02.350 RUH Desc #006: RUH Type: Initially Isolated 00:10:02.350 RUH Desc #007: RUH Type: Initially Isolated 00:10:02.350 00:10:02.350 FDP reclaim unit handle usage log page 00:10:02.609 ====================================== 00:10:02.609 Number of Reclaim Unit Handles: 8 00:10:02.609 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:02.609 RUH Usage Desc #001: RUH Attributes: Unused 00:10:02.609 RUH Usage Desc #002: RUH Attributes: Unused 00:10:02.609 RUH Usage Desc #003: RUH Attributes: Unused 00:10:02.609 RUH Usage Desc #004: RUH Attributes: Unused 00:10:02.609 RUH Usage Desc #005: RUH Attributes: Unused 00:10:02.609 RUH Usage Desc #006: RUH Attributes: Unused 00:10:02.609 RUH Usage Desc #007: RUH Attributes: Unused 00:10:02.609 00:10:02.609 FDP statistics log page 00:10:02.609 ======================= 00:10:02.609 Host bytes with metadata written: 387362816 00:10:02.609 Media bytes with metadata written: 387448832 00:10:02.609 Media bytes erased: 0 00:10:02.609 00:10:02.609 FDP events log page 00:10:02.609 =================== 00:10:02.609 Number of FDP events: 0 00:10:02.609 00:10:02.609 00:10:02.609 real 0m1.564s 00:10:02.609 user 0m0.639s 00:10:02.609 sys 0m0.719s 00:10:02.609 09:47:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.609 ************************************ 00:10:02.609 END TEST nvme_identify 00:10:02.609 ************************************ 00:10:02.609 09:47:56 -- common/autotest_common.sh@10 -- # set +x 00:10:02.609 09:47:56 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:02.609 09:47:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:02.609 09:47:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:02.609 09:47:56 -- common/autotest_common.sh@10 -- # set +x 00:10:02.609 ************************************ 00:10:02.609 START TEST nvme_perf 00:10:02.609 ************************************ 00:10:02.609 09:47:56 -- common/autotest_common.sh@1104 -- # nvme_perf 00:10:02.609 09:47:56 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:03.988 Initializing NVMe Controllers 00:10:03.988 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:03.988 
Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:03.988 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:03.988 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:03.988 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:03.988 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:03.988 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:03.988 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:03.989 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:03.989 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:03.989 Initialization complete. Launching workers. 00:10:03.989 ======================================================== 00:10:03.989 Latency(us) 00:10:03.989 Device Information : IOPS MiB/s Average min max 00:10:03.989 PCIE (0000:00:06.0) NSID 1 from core 0: 13920.21 163.13 9189.37 6679.66 35984.51 00:10:03.989 PCIE (0000:00:07.0) NSID 1 from core 0: 13920.21 163.13 9179.07 6981.27 34050.35 00:10:03.989 PCIE (0000:00:09.0) NSID 1 from core 0: 13920.21 163.13 9166.51 6979.59 33446.60 00:10:03.989 PCIE (0000:00:08.0) NSID 1 from core 0: 13920.21 163.13 9153.67 6999.95 31529.47 00:10:03.989 PCIE (0000:00:08.0) NSID 2 from core 0: 13920.21 163.13 9141.63 6871.46 29663.17 00:10:03.989 PCIE (0000:00:08.0) NSID 3 from core 0: 13920.21 163.13 9129.43 6826.65 27780.87 00:10:03.989 ======================================================== 00:10:03.989 Total : 83521.24 978.76 9159.95 6679.66 35984.51 00:10:03.989 00:10:03.989 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:03.989 ================================================================================= 00:10:03.989 1.00000% : 7089.804us 00:10:03.989 10.00000% : 7626.007us 00:10:03.989 25.00000% : 8221.789us 00:10:03.989 50.00000% : 8877.149us 00:10:03.989 75.00000% : 9651.665us 00:10:03.989 90.00000% : 10426.182us 00:10:03.989 95.00000% : 10902.807us 00:10:03.989 98.00000% : 12690.153us 00:10:03.989 99.00000% : 14358.342us 00:10:03.989 99.50000% : 33602.095us 00:10:03.989 99.90000% : 35508.596us 00:10:03.989 99.99000% : 35985.222us 00:10:03.989 99.99900% : 35985.222us 00:10:03.989 99.99990% : 35985.222us 00:10:03.989 99.99999% : 35985.222us 00:10:03.989 00:10:03.989 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:03.989 ================================================================================= 00:10:03.989 1.00000% : 7298.327us 00:10:03.989 10.00000% : 7745.164us 00:10:03.989 25.00000% : 8281.367us 00:10:03.989 50.00000% : 8877.149us 00:10:03.989 75.00000% : 9592.087us 00:10:03.989 90.00000% : 10307.025us 00:10:03.989 95.00000% : 10783.651us 00:10:03.989 98.00000% : 12749.731us 00:10:03.989 99.00000% : 14120.029us 00:10:03.989 99.50000% : 31695.593us 00:10:03.989 99.90000% : 33602.095us 00:10:03.989 99.99000% : 34078.720us 00:10:03.989 99.99900% : 34078.720us 00:10:03.989 99.99990% : 34078.720us 00:10:03.989 99.99999% : 34078.720us 00:10:03.989 00:10:03.989 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:03.989 ================================================================================= 00:10:03.989 1.00000% : 7268.538us 00:10:03.989 10.00000% : 7745.164us 00:10:03.989 25.00000% : 8281.367us 00:10:03.989 50.00000% : 8877.149us 00:10:03.989 75.00000% : 9651.665us 00:10:03.989 90.00000% : 10366.604us 00:10:03.989 95.00000% : 10843.229us 00:10:03.989 98.00000% : 11975.215us 00:10:03.989 99.00000% : 13762.560us 00:10:03.989 99.50000% : 30980.655us 00:10:03.989 99.90000% : 33125.469us 
00:10:03.989 99.99000% : 33602.095us 00:10:03.989 99.99900% : 33602.095us 00:10:03.989 99.99990% : 33602.095us 00:10:03.989 99.99999% : 33602.095us 00:10:03.989 00:10:03.989 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:03.989 ================================================================================= 00:10:03.989 1.00000% : 7298.327us 00:10:03.989 10.00000% : 7745.164us 00:10:03.989 25.00000% : 8281.367us 00:10:03.989 50.00000% : 8877.149us 00:10:03.989 75.00000% : 9651.665us 00:10:03.989 90.00000% : 10366.604us 00:10:03.989 95.00000% : 10843.229us 00:10:03.989 98.00000% : 12153.949us 00:10:03.989 99.00000% : 13166.778us 00:10:03.989 99.50000% : 28954.996us 00:10:03.989 99.90000% : 31218.967us 00:10:03.989 99.99000% : 31695.593us 00:10:03.989 99.99900% : 31695.593us 00:10:03.989 99.99990% : 31695.593us 00:10:03.989 99.99999% : 31695.593us 00:10:03.989 00:10:03.989 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:03.989 ================================================================================= 00:10:03.989 1.00000% : 7298.327us 00:10:03.989 10.00000% : 7745.164us 00:10:03.989 25.00000% : 8340.945us 00:10:03.989 50.00000% : 8936.727us 00:10:03.989 75.00000% : 9651.665us 00:10:03.989 90.00000% : 10307.025us 00:10:03.989 95.00000% : 10783.651us 00:10:03.989 98.00000% : 12451.840us 00:10:03.989 99.00000% : 13881.716us 00:10:03.989 99.50000% : 27048.495us 00:10:03.989 99.90000% : 29193.309us 00:10:03.989 99.99000% : 29669.935us 00:10:03.989 99.99900% : 29669.935us 00:10:03.989 99.99990% : 29669.935us 00:10:03.989 99.99999% : 29669.935us 00:10:03.989 00:10:03.989 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:03.989 ================================================================================= 00:10:03.989 1.00000% : 7268.538us 00:10:03.989 10.00000% : 7745.164us 00:10:03.989 25.00000% : 8340.945us 00:10:03.989 50.00000% : 8936.727us 00:10:03.989 75.00000% : 9592.087us 00:10:03.989 90.00000% : 10307.025us 00:10:03.989 95.00000% : 10724.073us 00:10:03.989 98.00000% : 12749.731us 00:10:03.989 99.00000% : 14060.451us 00:10:03.989 99.50000% : 25261.149us 00:10:03.989 99.90000% : 27405.964us 00:10:03.989 99.99000% : 27763.433us 00:10:03.989 99.99900% : 27882.589us 00:10:03.989 99.99990% : 27882.589us 00:10:03.989 99.99999% : 27882.589us 00:10:03.989 00:10:03.989 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:03.989 ============================================================================== 00:10:03.989 Range in us Cumulative IO count 00:10:03.989 6672.756 - 6702.545: 0.0072% ( 1) 00:10:03.989 6702.545 - 6732.335: 0.0143% ( 1) 00:10:03.989 6732.335 - 6762.124: 0.0287% ( 2) 00:10:03.989 6762.124 - 6791.913: 0.0430% ( 2) 00:10:03.989 6791.913 - 6821.702: 0.0573% ( 2) 00:10:03.989 6821.702 - 6851.491: 0.0788% ( 3) 00:10:03.989 6851.491 - 6881.280: 0.1003% ( 3) 00:10:03.989 6881.280 - 6911.069: 0.1290% ( 4) 00:10:03.989 6911.069 - 6940.858: 0.1864% ( 8) 00:10:03.989 6940.858 - 6970.647: 0.2652% ( 11) 00:10:03.989 6970.647 - 7000.436: 0.3870% ( 17) 00:10:03.989 7000.436 - 7030.225: 0.5447% ( 22) 00:10:03.989 7030.225 - 7060.015: 0.7024% ( 22) 00:10:03.989 7060.015 - 7089.804: 1.0106% ( 43) 00:10:03.989 7089.804 - 7119.593: 1.3045% ( 41) 00:10:03.989 7119.593 - 7149.382: 1.6198% ( 44) 00:10:03.989 7149.382 - 7179.171: 1.9854% ( 51) 00:10:03.989 7179.171 - 7208.960: 2.3868% ( 56) 00:10:03.989 7208.960 - 7238.749: 2.8670% ( 67) 00:10:03.989 7238.749 - 7268.538: 3.3615% ( 69) 00:10:03.989 7268.538 - 
7298.327: 3.9206% ( 78) 00:10:03.989 7298.327 - 7328.116: 4.4510% ( 74) 00:10:03.989 7328.116 - 7357.905: 5.0530% ( 84) 00:10:03.989 7357.905 - 7387.695: 5.6766% ( 87) 00:10:03.989 7387.695 - 7417.484: 6.2500% ( 80) 00:10:03.989 7417.484 - 7447.273: 6.8449% ( 83) 00:10:03.989 7447.273 - 7477.062: 7.5330% ( 96) 00:10:03.989 7477.062 - 7506.851: 8.0419% ( 71) 00:10:03.989 7506.851 - 7536.640: 8.6941% ( 91) 00:10:03.989 7536.640 - 7566.429: 9.3320% ( 89) 00:10:03.989 7566.429 - 7596.218: 9.9556% ( 87) 00:10:03.989 7596.218 - 7626.007: 10.5361% ( 81) 00:10:03.989 7626.007 - 7685.585: 11.7976% ( 176) 00:10:03.989 7685.585 - 7745.164: 13.0519% ( 175) 00:10:03.989 7745.164 - 7804.742: 14.3062% ( 175) 00:10:03.989 7804.742 - 7864.320: 15.5677% ( 176) 00:10:03.989 7864.320 - 7923.898: 17.0370% ( 205) 00:10:03.989 7923.898 - 7983.476: 18.4275% ( 194) 00:10:03.989 7983.476 - 8043.055: 19.9756% ( 216) 00:10:03.989 8043.055 - 8102.633: 21.6456% ( 233) 00:10:03.989 8102.633 - 8162.211: 23.3013% ( 231) 00:10:03.989 8162.211 - 8221.789: 25.3727% ( 289) 00:10:03.989 8221.789 - 8281.367: 27.4513% ( 290) 00:10:03.989 8281.367 - 8340.945: 29.6373% ( 305) 00:10:03.989 8340.945 - 8400.524: 31.8449% ( 308) 00:10:03.989 8400.524 - 8460.102: 34.1958% ( 328) 00:10:03.989 8460.102 - 8519.680: 36.4679% ( 317) 00:10:03.989 8519.680 - 8579.258: 38.7400% ( 317) 00:10:03.989 8579.258 - 8638.836: 41.0335% ( 320) 00:10:03.989 8638.836 - 8698.415: 43.3056% ( 317) 00:10:03.989 8698.415 - 8757.993: 45.6279% ( 324) 00:10:03.989 8757.993 - 8817.571: 48.0361% ( 336) 00:10:03.989 8817.571 - 8877.149: 50.2365% ( 307) 00:10:03.989 8877.149 - 8936.727: 52.5946% ( 329) 00:10:03.989 8936.727 - 8996.305: 54.9599% ( 330) 00:10:03.989 8996.305 - 9055.884: 57.1746% ( 309) 00:10:03.989 9055.884 - 9115.462: 59.5183% ( 327) 00:10:03.989 9115.462 - 9175.040: 61.7474% ( 311) 00:10:03.989 9175.040 - 9234.618: 63.8833% ( 298) 00:10:03.989 9234.618 - 9294.196: 65.7970% ( 267) 00:10:03.989 9294.196 - 9353.775: 67.7681% ( 275) 00:10:03.989 9353.775 - 9413.353: 69.6674% ( 265) 00:10:03.989 9413.353 - 9472.931: 71.3733% ( 238) 00:10:03.989 9472.931 - 9532.509: 72.9573% ( 221) 00:10:03.989 9532.509 - 9592.087: 74.5341% ( 220) 00:10:03.989 9592.087 - 9651.665: 75.8673% ( 186) 00:10:03.989 9651.665 - 9711.244: 77.0571% ( 166) 00:10:03.989 9711.244 - 9770.822: 78.3329% ( 178) 00:10:03.989 9770.822 - 9830.400: 79.4868% ( 161) 00:10:03.989 9830.400 - 9889.978: 80.6479% ( 162) 00:10:03.989 9889.978 - 9949.556: 81.8234% ( 164) 00:10:03.989 9949.556 - 10009.135: 82.9774% ( 161) 00:10:03.989 10009.135 - 10068.713: 84.1313% ( 161) 00:10:03.989 10068.713 - 10128.291: 85.2996% ( 163) 00:10:03.989 10128.291 - 10187.869: 86.3604% ( 148) 00:10:03.990 10187.869 - 10247.447: 87.4355% ( 150) 00:10:03.990 10247.447 - 10307.025: 88.4819% ( 146) 00:10:03.990 10307.025 - 10366.604: 89.4209% ( 131) 00:10:03.990 10366.604 - 10426.182: 90.3813% ( 134) 00:10:03.990 10426.182 - 10485.760: 91.1196% ( 103) 00:10:03.990 10485.760 - 10545.338: 91.8363% ( 100) 00:10:03.990 10545.338 - 10604.916: 92.5745% ( 103) 00:10:03.990 10604.916 - 10664.495: 93.2053% ( 88) 00:10:03.990 10664.495 - 10724.073: 93.8360% ( 88) 00:10:03.990 10724.073 - 10783.651: 94.3162% ( 67) 00:10:03.990 10783.651 - 10843.229: 94.7176% ( 56) 00:10:03.990 10843.229 - 10902.807: 95.0760% ( 50) 00:10:03.990 10902.807 - 10962.385: 95.4057% ( 46) 00:10:03.990 10962.385 - 11021.964: 95.6780% ( 38) 00:10:03.990 11021.964 - 11081.542: 95.8931% ( 30) 00:10:03.990 11081.542 - 11141.120: 96.1296% ( 33) 00:10:03.990 
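
A useful cross-check between this histogram and the throughput table: the run lasted one second at roughly 13,920 I/Os for this namespace, so a single I/O is about 1/13920 of the total, which is exactly the 0.0072% cumulative share shown for the first bucket of this histogram (count 1). It also explains why the extreme tail lines in the summaries (99.99000% and beyond) collapse to a single value: with ~14k samples there is no resolution finer than ~0.007%. In Python (values from the log above; illustrative):

iops = 13920.21    # per-namespace IOPS from the device table above
runtime_s = 1      # -t 1
print(f"{100 / (iops * runtime_s):.4f}%")  # ~0.0072%, the share of one I/O, matching the first bucket
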
11141.120 - 11200.698: 96.3374% ( 29) 00:10:03.990 11200.698 - 11260.276: 96.5023% ( 23) 00:10:03.990 11260.276 - 11319.855: 96.6170% ( 16) 00:10:03.990 11319.855 - 11379.433: 96.7245% ( 15) 00:10:03.990 11379.433 - 11439.011: 96.8535% ( 18) 00:10:03.990 11439.011 - 11498.589: 96.9610% ( 15) 00:10:03.990 11498.589 - 11558.167: 97.0614% ( 14) 00:10:03.990 11558.167 - 11617.745: 97.1617% ( 14) 00:10:03.990 11617.745 - 11677.324: 97.2405% ( 11) 00:10:03.990 11677.324 - 11736.902: 97.3122% ( 10) 00:10:03.990 11736.902 - 11796.480: 97.3839% ( 10) 00:10:03.990 11796.480 - 11856.058: 97.4556% ( 10) 00:10:03.990 11856.058 - 11915.636: 97.5129% ( 8) 00:10:03.990 11915.636 - 11975.215: 97.5702% ( 8) 00:10:03.990 11975.215 - 12034.793: 97.6061% ( 5) 00:10:03.990 12034.793 - 12094.371: 97.6419% ( 5) 00:10:03.990 12094.371 - 12153.949: 97.6849% ( 6) 00:10:03.990 12153.949 - 12213.527: 97.7136% ( 4) 00:10:03.990 12213.527 - 12273.105: 97.7566% ( 6) 00:10:03.990 12273.105 - 12332.684: 97.7924% ( 5) 00:10:03.990 12332.684 - 12392.262: 97.8354% ( 6) 00:10:03.990 12392.262 - 12451.840: 97.8713% ( 5) 00:10:03.990 12451.840 - 12511.418: 97.8999% ( 4) 00:10:03.990 12511.418 - 12570.996: 97.9429% ( 6) 00:10:03.990 12570.996 - 12630.575: 97.9788% ( 5) 00:10:03.990 12630.575 - 12690.153: 98.0146% ( 5) 00:10:03.990 12690.153 - 12749.731: 98.0576% ( 6) 00:10:03.990 12749.731 - 12809.309: 98.0720% ( 2) 00:10:03.990 12809.309 - 12868.887: 98.1293% ( 8) 00:10:03.990 12868.887 - 12928.465: 98.1723% ( 6) 00:10:03.990 12928.465 - 12988.044: 98.1938% ( 3) 00:10:03.990 12988.044 - 13047.622: 98.2440% ( 7) 00:10:03.990 13047.622 - 13107.200: 98.2798% ( 5) 00:10:03.990 13107.200 - 13166.778: 98.3157% ( 5) 00:10:03.990 13166.778 - 13226.356: 98.3587% ( 6) 00:10:03.990 13226.356 - 13285.935: 98.3873% ( 4) 00:10:03.990 13285.935 - 13345.513: 98.4232% ( 5) 00:10:03.990 13345.513 - 13405.091: 98.4662% ( 6) 00:10:03.990 13405.091 - 13464.669: 98.5020% ( 5) 00:10:03.990 13464.669 - 13524.247: 98.5378% ( 5) 00:10:03.990 13524.247 - 13583.825: 98.5737% ( 5) 00:10:03.990 13583.825 - 13643.404: 98.6024% ( 4) 00:10:03.990 13643.404 - 13702.982: 98.6382% ( 5) 00:10:03.990 13702.982 - 13762.560: 98.6884% ( 7) 00:10:03.990 13762.560 - 13822.138: 98.7170% ( 4) 00:10:03.990 13822.138 - 13881.716: 98.7600% ( 6) 00:10:03.990 13881.716 - 13941.295: 98.7815% ( 3) 00:10:03.990 13941.295 - 14000.873: 98.8317% ( 7) 00:10:03.990 14000.873 - 14060.451: 98.8532% ( 3) 00:10:03.990 14060.451 - 14120.029: 98.9034% ( 7) 00:10:03.990 14120.029 - 14179.607: 98.9321% ( 4) 00:10:03.990 14179.607 - 14239.185: 98.9607% ( 4) 00:10:03.990 14239.185 - 14298.764: 98.9822% ( 3) 00:10:03.990 14298.764 - 14358.342: 99.0037% ( 3) 00:10:03.990 14358.342 - 14417.920: 99.0324% ( 4) 00:10:03.990 14417.920 - 14477.498: 99.0539% ( 3) 00:10:03.990 14477.498 - 14537.076: 99.0682% ( 2) 00:10:03.990 14537.076 - 14596.655: 99.0826% ( 2) 00:10:03.990 31218.967 - 31457.280: 99.1184% ( 5) 00:10:03.990 31457.280 - 31695.593: 99.1614% ( 6) 00:10:03.990 31695.593 - 31933.905: 99.2044% ( 6) 00:10:03.990 31933.905 - 32172.218: 99.2474% ( 6) 00:10:03.990 32172.218 - 32410.531: 99.2976% ( 7) 00:10:03.990 32410.531 - 32648.844: 99.3406% ( 6) 00:10:03.990 32648.844 - 32887.156: 99.3764% ( 5) 00:10:03.990 32887.156 - 33125.469: 99.4266% ( 7) 00:10:03.990 33125.469 - 33363.782: 99.4768% ( 7) 00:10:03.990 33363.782 - 33602.095: 99.5198% ( 6) 00:10:03.990 33602.095 - 33840.407: 99.5628% ( 6) 00:10:03.990 33840.407 - 34078.720: 99.6130% ( 7) 00:10:03.990 34078.720 - 34317.033: 99.6631% ( 7) 
00:10:03.990 34317.033 - 34555.345: 99.6990% ( 5) 00:10:03.990 34555.345 - 34793.658: 99.7635% ( 9) 00:10:03.990 34793.658 - 35031.971: 99.8065% ( 6) 00:10:03.990 35031.971 - 35270.284: 99.8567% ( 7) 00:10:03.990 35270.284 - 35508.596: 99.9068% ( 7) 00:10:03.990 35508.596 - 35746.909: 99.9570% ( 7) 00:10:03.990 35746.909 - 35985.222: 100.0000% ( 6) 00:10:03.990 00:10:03.990 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:03.990 ============================================================================== 00:10:03.990 Range in us Cumulative IO count 00:10:03.990 6970.647 - 7000.436: 0.0143% ( 2) 00:10:03.990 7000.436 - 7030.225: 0.0358% ( 3) 00:10:03.990 7030.225 - 7060.015: 0.0645% ( 4) 00:10:03.990 7060.015 - 7089.804: 0.1075% ( 6) 00:10:03.990 7089.804 - 7119.593: 0.1720% ( 9) 00:10:03.990 7119.593 - 7149.382: 0.2437% ( 10) 00:10:03.990 7149.382 - 7179.171: 0.3225% ( 11) 00:10:03.990 7179.171 - 7208.960: 0.4659% ( 20) 00:10:03.990 7208.960 - 7238.749: 0.7167% ( 35) 00:10:03.990 7238.749 - 7268.538: 0.9891% ( 38) 00:10:03.990 7268.538 - 7298.327: 1.2973% ( 43) 00:10:03.990 7298.327 - 7328.116: 1.6772% ( 53) 00:10:03.990 7328.116 - 7357.905: 2.0857% ( 57) 00:10:03.990 7357.905 - 7387.695: 2.6161% ( 74) 00:10:03.990 7387.695 - 7417.484: 3.1035% ( 68) 00:10:03.990 7417.484 - 7447.273: 3.6697% ( 79) 00:10:03.990 7447.273 - 7477.062: 4.3220% ( 91) 00:10:03.990 7477.062 - 7506.851: 4.9957% ( 94) 00:10:03.990 7506.851 - 7536.640: 5.6766% ( 95) 00:10:03.990 7536.640 - 7566.429: 6.3718% ( 97) 00:10:03.990 7566.429 - 7596.218: 7.0312% ( 92) 00:10:03.990 7596.218 - 7626.007: 7.7122% ( 95) 00:10:03.990 7626.007 - 7685.585: 9.1958% ( 207) 00:10:03.990 7685.585 - 7745.164: 10.6436% ( 202) 00:10:03.990 7745.164 - 7804.742: 12.1488% ( 210) 00:10:03.990 7804.742 - 7864.320: 13.6970% ( 216) 00:10:03.990 7864.320 - 7923.898: 15.2093% ( 211) 00:10:03.990 7923.898 - 7983.476: 16.7790% ( 219) 00:10:03.990 7983.476 - 8043.055: 18.3630% ( 221) 00:10:03.990 8043.055 - 8102.633: 19.9541% ( 222) 00:10:03.990 8102.633 - 8162.211: 21.6241% ( 233) 00:10:03.990 8162.211 - 8221.789: 23.4088% ( 249) 00:10:03.990 8221.789 - 8281.367: 25.3297% ( 268) 00:10:03.990 8281.367 - 8340.945: 27.4441% ( 295) 00:10:03.990 8340.945 - 8400.524: 29.6588% ( 309) 00:10:03.990 8400.524 - 8460.102: 31.9811% ( 324) 00:10:03.990 8460.102 - 8519.680: 34.5040% ( 352) 00:10:03.990 8519.680 - 8579.258: 37.1560% ( 370) 00:10:03.990 8579.258 - 8638.836: 39.8724% ( 379) 00:10:03.990 8638.836 - 8698.415: 42.6319% ( 385) 00:10:03.990 8698.415 - 8757.993: 45.3412% ( 378) 00:10:03.990 8757.993 - 8817.571: 48.1150% ( 387) 00:10:03.990 8817.571 - 8877.149: 50.7956% ( 374) 00:10:03.990 8877.149 - 8936.727: 53.5192% ( 380) 00:10:03.991 8936.727 - 8996.305: 56.1640% ( 369) 00:10:03.991 8996.305 - 9055.884: 58.6583% ( 348) 00:10:03.991 9055.884 - 9115.462: 61.1669% ( 350) 00:10:03.991 9115.462 - 9175.040: 63.4748% ( 322) 00:10:03.991 9175.040 - 9234.618: 65.5318% ( 287) 00:10:03.991 9234.618 - 9294.196: 67.5674% ( 284) 00:10:03.991 9294.196 - 9353.775: 69.3951% ( 255) 00:10:03.991 9353.775 - 9413.353: 71.1368% ( 243) 00:10:03.991 9413.353 - 9472.931: 72.6921% ( 217) 00:10:03.991 9472.931 - 9532.509: 74.2259% ( 214) 00:10:03.991 9532.509 - 9592.087: 75.6666% ( 201) 00:10:03.991 9592.087 - 9651.665: 76.9997% ( 186) 00:10:03.991 9651.665 - 9711.244: 78.3902% ( 194) 00:10:03.991 9711.244 - 9770.822: 79.7162% ( 185) 00:10:03.991 9770.822 - 9830.400: 81.0206% ( 182) 00:10:03.991 9830.400 - 9889.978: 82.3323% ( 183) 00:10:03.991 9889.978 - 
9949.556: 83.6511% ( 184) 00:10:03.991 9949.556 - 10009.135: 84.8911% ( 173) 00:10:03.991 10009.135 - 10068.713: 86.0952% ( 168) 00:10:03.991 10068.713 - 10128.291: 87.3136% ( 170) 00:10:03.991 10128.291 - 10187.869: 88.3744% ( 148) 00:10:03.991 10187.869 - 10247.447: 89.4925% ( 156) 00:10:03.991 10247.447 - 10307.025: 90.4960% ( 140) 00:10:03.991 10307.025 - 10366.604: 91.3776% ( 123) 00:10:03.991 10366.604 - 10426.182: 92.1947% ( 114) 00:10:03.991 10426.182 - 10485.760: 92.8756% ( 95) 00:10:03.991 10485.760 - 10545.338: 93.4705% ( 83) 00:10:03.991 10545.338 - 10604.916: 94.0080% ( 75) 00:10:03.991 10604.916 - 10664.495: 94.4452% ( 61) 00:10:03.991 10664.495 - 10724.073: 94.7678% ( 45) 00:10:03.991 10724.073 - 10783.651: 95.0831% ( 44) 00:10:03.991 10783.651 - 10843.229: 95.2910% ( 29) 00:10:03.991 10843.229 - 10902.807: 95.4917% ( 28) 00:10:03.991 10902.807 - 10962.385: 95.6709% ( 25) 00:10:03.991 10962.385 - 11021.964: 95.7856% ( 16) 00:10:03.991 11021.964 - 11081.542: 95.8787% ( 13) 00:10:03.991 11081.542 - 11141.120: 95.9719% ( 13) 00:10:03.991 11141.120 - 11200.698: 96.0722% ( 14) 00:10:03.991 11200.698 - 11260.276: 96.1583% ( 12) 00:10:03.991 11260.276 - 11319.855: 96.2658% ( 15) 00:10:03.991 11319.855 - 11379.433: 96.3733% ( 15) 00:10:03.991 11379.433 - 11439.011: 96.4665% ( 13) 00:10:03.991 11439.011 - 11498.589: 96.5596% ( 13) 00:10:03.991 11498.589 - 11558.167: 96.6456% ( 12) 00:10:03.991 11558.167 - 11617.745: 96.7388% ( 13) 00:10:03.991 11617.745 - 11677.324: 96.8320% ( 13) 00:10:03.991 11677.324 - 11736.902: 96.9395% ( 15) 00:10:03.991 11736.902 - 11796.480: 97.0255% ( 12) 00:10:03.991 11796.480 - 11856.058: 97.1259% ( 14) 00:10:03.991 11856.058 - 11915.636: 97.2047% ( 11) 00:10:03.991 11915.636 - 11975.215: 97.2835% ( 11) 00:10:03.991 11975.215 - 12034.793: 97.3409% ( 8) 00:10:03.991 12034.793 - 12094.371: 97.3839% ( 6) 00:10:03.991 12094.371 - 12153.949: 97.4341% ( 7) 00:10:03.991 12153.949 - 12213.527: 97.4986% ( 9) 00:10:03.991 12213.527 - 12273.105: 97.5487% ( 7) 00:10:03.991 12273.105 - 12332.684: 97.6132% ( 9) 00:10:03.991 12332.684 - 12392.262: 97.6849% ( 10) 00:10:03.991 12392.262 - 12451.840: 97.7351% ( 7) 00:10:03.991 12451.840 - 12511.418: 97.8139% ( 11) 00:10:03.991 12511.418 - 12570.996: 97.8569% ( 6) 00:10:03.991 12570.996 - 12630.575: 97.9286% ( 10) 00:10:03.991 12630.575 - 12690.153: 97.9931% ( 9) 00:10:03.991 12690.153 - 12749.731: 98.0648% ( 10) 00:10:03.991 12749.731 - 12809.309: 98.1221% ( 8) 00:10:03.991 12809.309 - 12868.887: 98.1723% ( 7) 00:10:03.991 12868.887 - 12928.465: 98.2225% ( 7) 00:10:03.991 12928.465 - 12988.044: 98.2726% ( 7) 00:10:03.991 12988.044 - 13047.622: 98.3157% ( 6) 00:10:03.991 13047.622 - 13107.200: 98.3587% ( 6) 00:10:03.991 13107.200 - 13166.778: 98.4088% ( 7) 00:10:03.991 13166.778 - 13226.356: 98.4590% ( 7) 00:10:03.991 13226.356 - 13285.935: 98.5163% ( 8) 00:10:03.991 13285.935 - 13345.513: 98.5665% ( 7) 00:10:03.991 13345.513 - 13405.091: 98.6167% ( 7) 00:10:03.991 13405.091 - 13464.669: 98.6740% ( 8) 00:10:03.991 13464.669 - 13524.247: 98.7170% ( 6) 00:10:03.991 13524.247 - 13583.825: 98.7457% ( 4) 00:10:03.991 13583.825 - 13643.404: 98.7744% ( 4) 00:10:03.991 13643.404 - 13702.982: 98.8030% ( 4) 00:10:03.991 13702.982 - 13762.560: 98.8317% ( 4) 00:10:03.991 13762.560 - 13822.138: 98.8604% ( 4) 00:10:03.991 13822.138 - 13881.716: 98.8890% ( 4) 00:10:03.991 13881.716 - 13941.295: 98.9177% ( 4) 00:10:03.991 13941.295 - 14000.873: 98.9464% ( 4) 00:10:03.991 14000.873 - 14060.451: 98.9751% ( 4) 00:10:03.991 14060.451 - 
14120.029: 99.0037% ( 4) 00:10:03.991 14120.029 - 14179.607: 99.0324% ( 4) 00:10:03.991 14179.607 - 14239.185: 99.0539% ( 3) 00:10:03.991 14239.185 - 14298.764: 99.0826% ( 4) 00:10:03.991 29431.622 - 29550.778: 99.1041% ( 3) 00:10:03.991 29550.778 - 29669.935: 99.1256% ( 3) 00:10:03.991 29669.935 - 29789.091: 99.1471% ( 3) 00:10:03.991 29789.091 - 29908.247: 99.1686% ( 3) 00:10:03.991 29908.247 - 30027.404: 99.1972% ( 4) 00:10:03.991 30027.404 - 30146.560: 99.2188% ( 3) 00:10:03.991 30146.560 - 30265.716: 99.2403% ( 3) 00:10:03.991 30265.716 - 30384.873: 99.2689% ( 4) 00:10:03.991 30384.873 - 30504.029: 99.2904% ( 3) 00:10:03.991 30504.029 - 30742.342: 99.3334% ( 6) 00:10:03.991 30742.342 - 30980.655: 99.3836% ( 7) 00:10:03.991 30980.655 - 31218.967: 99.4194% ( 5) 00:10:03.991 31218.967 - 31457.280: 99.4696% ( 7) 00:10:03.991 31457.280 - 31695.593: 99.5198% ( 7) 00:10:03.991 31695.593 - 31933.905: 99.5700% ( 7) 00:10:03.991 31933.905 - 32172.218: 99.6130% ( 6) 00:10:03.991 32172.218 - 32410.531: 99.6631% ( 7) 00:10:03.991 32410.531 - 32648.844: 99.7133% ( 7) 00:10:03.991 32648.844 - 32887.156: 99.7563% ( 6) 00:10:03.991 32887.156 - 33125.469: 99.8065% ( 7) 00:10:03.991 33125.469 - 33363.782: 99.8567% ( 7) 00:10:03.991 33363.782 - 33602.095: 99.9068% ( 7) 00:10:03.991 33602.095 - 33840.407: 99.9570% ( 7) 00:10:03.991 33840.407 - 34078.720: 100.0000% ( 6) 00:10:03.991 00:10:03.991 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:03.991 ============================================================================== 00:10:03.991 Range in us Cumulative IO count 00:10:03.991 6970.647 - 7000.436: 0.0143% ( 2) 00:10:03.991 7000.436 - 7030.225: 0.0502% ( 5) 00:10:03.991 7030.225 - 7060.015: 0.0717% ( 3) 00:10:03.991 7060.015 - 7089.804: 0.1218% ( 7) 00:10:03.991 7089.804 - 7119.593: 0.1792% ( 8) 00:10:03.991 7119.593 - 7149.382: 0.2652% ( 12) 00:10:03.991 7149.382 - 7179.171: 0.4085% ( 20) 00:10:03.991 7179.171 - 7208.960: 0.6021% ( 27) 00:10:03.991 7208.960 - 7238.749: 0.8243% ( 31) 00:10:03.991 7238.749 - 7268.538: 1.1468% ( 45) 00:10:03.991 7268.538 - 7298.327: 1.5052% ( 50) 00:10:03.991 7298.327 - 7328.116: 1.9782% ( 66) 00:10:03.991 7328.116 - 7357.905: 2.4513% ( 66) 00:10:03.991 7357.905 - 7387.695: 2.9817% ( 74) 00:10:03.991 7387.695 - 7417.484: 3.5694% ( 82) 00:10:03.991 7417.484 - 7447.273: 4.1284% ( 78) 00:10:03.991 7447.273 - 7477.062: 4.7520% ( 87) 00:10:03.991 7477.062 - 7506.851: 5.4257% ( 94) 00:10:03.991 7506.851 - 7536.640: 6.1425% ( 100) 00:10:03.991 7536.640 - 7566.429: 6.8306% ( 96) 00:10:03.991 7566.429 - 7596.218: 7.5473% ( 100) 00:10:03.991 7596.218 - 7626.007: 8.2784% ( 102) 00:10:03.991 7626.007 - 7685.585: 9.7405% ( 204) 00:10:03.991 7685.585 - 7745.164: 11.2314% ( 208) 00:10:03.991 7745.164 - 7804.742: 12.7795% ( 216) 00:10:03.991 7804.742 - 7864.320: 14.2704% ( 208) 00:10:03.991 7864.320 - 7923.898: 15.7468% ( 206) 00:10:03.991 7923.898 - 7983.476: 17.2663% ( 212) 00:10:03.991 7983.476 - 8043.055: 18.8002% ( 214) 00:10:03.991 8043.055 - 8102.633: 20.3842% ( 221) 00:10:03.991 8102.633 - 8162.211: 22.0255% ( 229) 00:10:03.991 8162.211 - 8221.789: 23.7959% ( 247) 00:10:03.991 8221.789 - 8281.367: 25.6881% ( 264) 00:10:03.991 8281.367 - 8340.945: 27.7165% ( 283) 00:10:03.991 8340.945 - 8400.524: 29.8524% ( 298) 00:10:03.991 8400.524 - 8460.102: 32.1531% ( 321) 00:10:03.991 8460.102 - 8519.680: 34.5757% ( 338) 00:10:03.991 8519.680 - 8579.258: 37.2276% ( 370) 00:10:03.991 8579.258 - 8638.836: 39.8509% ( 366) 00:10:03.991 8638.836 - 8698.415: 42.5172% ( 372) 
00:10:03.991 8698.415 - 8757.993: 45.2050% ( 375) 00:10:03.991 8757.993 - 8817.571: 47.8784% ( 373) 00:10:03.991 8817.571 - 8877.149: 50.4372% ( 357) 00:10:03.991 8877.149 - 8936.727: 53.0318% ( 362) 00:10:03.991 8936.727 - 8996.305: 55.5763% ( 355) 00:10:03.991 8996.305 - 9055.884: 58.0204% ( 341) 00:10:03.991 9055.884 - 9115.462: 60.5003% ( 346) 00:10:03.991 9115.462 - 9175.040: 62.7437% ( 313) 00:10:03.991 9175.040 - 9234.618: 64.8581% ( 295) 00:10:03.991 9234.618 - 9294.196: 66.9438% ( 291) 00:10:03.991 9294.196 - 9353.775: 68.7500% ( 252) 00:10:03.991 9353.775 - 9413.353: 70.4702% ( 240) 00:10:03.991 9413.353 - 9472.931: 71.9753% ( 210) 00:10:03.991 9472.931 - 9532.509: 73.4303% ( 203) 00:10:03.991 9532.509 - 9592.087: 74.7706% ( 187) 00:10:03.991 9592.087 - 9651.665: 76.0966% ( 185) 00:10:03.992 9651.665 - 9711.244: 77.4369% ( 187) 00:10:03.992 9711.244 - 9770.822: 78.7486% ( 183) 00:10:03.992 9770.822 - 9830.400: 80.0745% ( 185) 00:10:03.992 9830.400 - 9889.978: 81.4005% ( 185) 00:10:03.992 9889.978 - 9949.556: 82.7193% ( 184) 00:10:03.992 9949.556 - 10009.135: 84.0596% ( 187) 00:10:03.992 10009.135 - 10068.713: 85.3139% ( 175) 00:10:03.992 10068.713 - 10128.291: 86.5037% ( 166) 00:10:03.992 10128.291 - 10187.869: 87.6505% ( 160) 00:10:03.992 10187.869 - 10247.447: 88.7185% ( 149) 00:10:03.992 10247.447 - 10307.025: 89.6861% ( 135) 00:10:03.992 10307.025 - 10366.604: 90.5748% ( 124) 00:10:03.992 10366.604 - 10426.182: 91.4349% ( 120) 00:10:03.992 10426.182 - 10485.760: 92.1588% ( 101) 00:10:03.992 10485.760 - 10545.338: 92.8899% ( 102) 00:10:03.992 10545.338 - 10604.916: 93.5063% ( 86) 00:10:03.992 10604.916 - 10664.495: 93.9579% ( 63) 00:10:03.992 10664.495 - 10724.073: 94.4022% ( 62) 00:10:03.992 10724.073 - 10783.651: 94.7749% ( 52) 00:10:03.992 10783.651 - 10843.229: 95.0831% ( 43) 00:10:03.992 10843.229 - 10902.807: 95.3483% ( 37) 00:10:03.992 10902.807 - 10962.385: 95.5705% ( 31) 00:10:03.992 10962.385 - 11021.964: 95.7712% ( 28) 00:10:03.992 11021.964 - 11081.542: 95.9289% ( 22) 00:10:03.992 11081.542 - 11141.120: 96.1009% ( 24) 00:10:03.992 11141.120 - 11200.698: 96.2658% ( 23) 00:10:03.992 11200.698 - 11260.276: 96.4163% ( 21) 00:10:03.992 11260.276 - 11319.855: 96.5883% ( 24) 00:10:03.992 11319.855 - 11379.433: 96.7317% ( 20) 00:10:03.992 11379.433 - 11439.011: 96.8678% ( 19) 00:10:03.992 11439.011 - 11498.589: 97.0040% ( 19) 00:10:03.992 11498.589 - 11558.167: 97.1474% ( 20) 00:10:03.992 11558.167 - 11617.745: 97.2835% ( 19) 00:10:03.992 11617.745 - 11677.324: 97.4126% ( 18) 00:10:03.992 11677.324 - 11736.902: 97.5344% ( 17) 00:10:03.992 11736.902 - 11796.480: 97.6419% ( 15) 00:10:03.992 11796.480 - 11856.058: 97.7566% ( 16) 00:10:03.992 11856.058 - 11915.636: 97.8856% ( 18) 00:10:03.992 11915.636 - 11975.215: 98.0003% ( 16) 00:10:03.992 11975.215 - 12034.793: 98.1150% ( 16) 00:10:03.992 12034.793 - 12094.371: 98.2296% ( 16) 00:10:03.992 12094.371 - 12153.949: 98.2942% ( 9) 00:10:03.992 12153.949 - 12213.527: 98.3658% ( 10) 00:10:03.992 12213.527 - 12273.105: 98.4303% ( 9) 00:10:03.992 12273.105 - 12332.684: 98.4805% ( 7) 00:10:03.992 12332.684 - 12392.262: 98.5092% ( 4) 00:10:03.992 12392.262 - 12451.840: 98.5450% ( 5) 00:10:03.992 12451.840 - 12511.418: 98.5808% ( 5) 00:10:03.992 12511.418 - 12570.996: 98.6239% ( 6) 00:10:03.992 12570.996 - 12630.575: 98.6525% ( 4) 00:10:03.992 12630.575 - 12690.153: 98.6884% ( 5) 00:10:03.992 12690.153 - 12749.731: 98.7099% ( 3) 00:10:03.992 12749.731 - 12809.309: 98.7314% ( 3) 00:10:03.992 12809.309 - 12868.887: 98.7529% ( 3) 
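
The per-device summary percentiles earlier and these cumulative histograms are two views of the same data: each summary line reports the end of the first bucket whose cumulative percentage reaches the target. A small sketch, using three consecutive buckets copied from the PCIE (0000:00:06.0) histogram above (a simplified reading of the output, not the tool's internal code):

buckets = [    # (bucket end in us, cumulative % of I/Os), from the 0000:00:06.0 histogram
    (8757.993, 45.6279),
    (8817.571, 48.0361),
    (8877.149, 50.2365),
]

def percentile(buckets, target_pct):
    # Return the end of the first bucket whose cumulative share reaches the target.
    for end_us, cum_pct in buckets:
        if cum_pct >= target_pct:
            return end_us

print(percentile(buckets, 50.0))  # 8877.149 -> the "50.00000% : 8877.149us" summary line
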
00:10:03.992 12868.887 - 12928.465: 98.7672% ( 2) 00:10:03.992 12928.465 - 12988.044: 98.7887% ( 3) 00:10:03.992 12988.044 - 13047.622: 98.8030% ( 2) 00:10:03.992 13047.622 - 13107.200: 98.8245% ( 3) 00:10:03.992 13107.200 - 13166.778: 98.8389% ( 2) 00:10:03.992 13166.778 - 13226.356: 98.8532% ( 2) 00:10:03.992 13226.356 - 13285.935: 98.8747% ( 3) 00:10:03.992 13285.935 - 13345.513: 98.8890% ( 2) 00:10:03.992 13345.513 - 13405.091: 98.9034% ( 2) 00:10:03.992 13405.091 - 13464.669: 98.9177% ( 2) 00:10:03.992 13464.669 - 13524.247: 98.9392% ( 3) 00:10:03.992 13524.247 - 13583.825: 98.9536% ( 2) 00:10:03.992 13583.825 - 13643.404: 98.9751% ( 3) 00:10:03.992 13643.404 - 13702.982: 98.9894% ( 2) 00:10:03.992 13702.982 - 13762.560: 99.0109% ( 3) 00:10:03.992 13762.560 - 13822.138: 99.0324% ( 3) 00:10:03.992 13822.138 - 13881.716: 99.0467% ( 2) 00:10:03.992 13881.716 - 13941.295: 99.0682% ( 3) 00:10:03.992 13941.295 - 14000.873: 99.0826% ( 2) 00:10:03.992 28478.371 - 28597.527: 99.0969% ( 2) 00:10:03.992 28597.527 - 28716.684: 99.1184% ( 3) 00:10:03.992 28716.684 - 28835.840: 99.1471% ( 4) 00:10:03.992 28835.840 - 28954.996: 99.1614% ( 2) 00:10:03.992 28954.996 - 29074.153: 99.1829% ( 3) 00:10:03.992 29074.153 - 29193.309: 99.2044% ( 3) 00:10:03.992 29193.309 - 29312.465: 99.2331% ( 4) 00:10:03.992 29312.465 - 29431.622: 99.2546% ( 3) 00:10:03.992 29431.622 - 29550.778: 99.2761% ( 3) 00:10:03.992 29550.778 - 29669.935: 99.2904% ( 2) 00:10:03.992 29669.935 - 29789.091: 99.3119% ( 3) 00:10:03.992 29789.091 - 29908.247: 99.3334% ( 3) 00:10:03.992 29908.247 - 30027.404: 99.3549% ( 3) 00:10:03.992 30027.404 - 30146.560: 99.3764% ( 3) 00:10:03.992 30146.560 - 30265.716: 99.3979% ( 3) 00:10:03.992 30265.716 - 30384.873: 99.4194% ( 3) 00:10:03.992 30384.873 - 30504.029: 99.4481% ( 4) 00:10:03.992 30504.029 - 30742.342: 99.4911% ( 6) 00:10:03.992 30742.342 - 30980.655: 99.5341% ( 6) 00:10:03.992 30980.655 - 31218.967: 99.5771% ( 6) 00:10:03.992 31218.967 - 31457.280: 99.6201% ( 6) 00:10:03.992 31457.280 - 31695.593: 99.6631% ( 6) 00:10:03.992 31695.593 - 31933.905: 99.7061% ( 6) 00:10:03.992 31933.905 - 32172.218: 99.7563% ( 7) 00:10:03.992 32172.218 - 32410.531: 99.7993% ( 6) 00:10:03.992 32410.531 - 32648.844: 99.8423% ( 6) 00:10:03.992 32648.844 - 32887.156: 99.8925% ( 7) 00:10:03.992 32887.156 - 33125.469: 99.9355% ( 6) 00:10:03.992 33125.469 - 33363.782: 99.9785% ( 6) 00:10:03.992 33363.782 - 33602.095: 100.0000% ( 3) 00:10:03.992 00:10:03.992 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:03.992 ============================================================================== 00:10:03.992 Range in us Cumulative IO count 00:10:03.992 6970.647 - 7000.436: 0.0072% ( 1) 00:10:03.992 7000.436 - 7030.225: 0.0215% ( 2) 00:10:03.992 7030.225 - 7060.015: 0.0358% ( 2) 00:10:03.992 7060.015 - 7089.804: 0.0717% ( 5) 00:10:03.992 7089.804 - 7119.593: 0.1362% ( 9) 00:10:03.992 7119.593 - 7149.382: 0.2294% ( 13) 00:10:03.992 7149.382 - 7179.171: 0.3440% ( 16) 00:10:03.992 7179.171 - 7208.960: 0.4802% ( 19) 00:10:03.992 7208.960 - 7238.749: 0.6451% ( 23) 00:10:03.992 7238.749 - 7268.538: 0.8386% ( 27) 00:10:03.992 7268.538 - 7298.327: 1.1253% ( 40) 00:10:03.992 7298.327 - 7328.116: 1.5123% ( 54) 00:10:03.992 7328.116 - 7357.905: 1.9352% ( 59) 00:10:03.992 7357.905 - 7387.695: 2.4728% ( 75) 00:10:03.992 7387.695 - 7417.484: 3.0390% ( 79) 00:10:03.992 7417.484 - 7447.273: 3.6124% ( 80) 00:10:03.992 7447.273 - 7477.062: 4.3005% ( 96) 00:10:03.992 7477.062 - 7506.851: 4.9814% ( 95) 00:10:03.992 
7506.851 - 7536.640: 5.6838% ( 98) 00:10:03.992 7536.640 - 7566.429: 6.4865% ( 112) 00:10:03.992 7566.429 - 7596.218: 7.1961% ( 99) 00:10:03.992 7596.218 - 7626.007: 7.9415% ( 104) 00:10:03.992 7626.007 - 7685.585: 9.4037% ( 204) 00:10:03.992 7685.585 - 7745.164: 10.9518% ( 216) 00:10:03.992 7745.164 - 7804.742: 12.4498% ( 209) 00:10:03.992 7804.742 - 7864.320: 13.9120% ( 204) 00:10:03.992 7864.320 - 7923.898: 15.4745% ( 218) 00:10:03.992 7923.898 - 7983.476: 16.9796% ( 210) 00:10:03.992 7983.476 - 8043.055: 18.5350% ( 217) 00:10:03.992 8043.055 - 8102.633: 20.0473% ( 211) 00:10:03.992 8102.633 - 8162.211: 21.5668% ( 212) 00:10:03.992 8162.211 - 8221.789: 23.2511% ( 235) 00:10:03.992 8221.789 - 8281.367: 25.1290% ( 262) 00:10:03.992 8281.367 - 8340.945: 27.0069% ( 262) 00:10:03.992 8340.945 - 8400.524: 29.1714% ( 302) 00:10:03.992 8400.524 - 8460.102: 31.5439% ( 331) 00:10:03.992 8460.102 - 8519.680: 34.0668% ( 352) 00:10:03.992 8519.680 - 8579.258: 36.7116% ( 369) 00:10:03.992 8579.258 - 8638.836: 39.3349% ( 366) 00:10:03.992 8638.836 - 8698.415: 42.0728% ( 382) 00:10:03.992 8698.415 - 8757.993: 44.8323% ( 385) 00:10:03.992 8757.993 - 8817.571: 47.4842% ( 370) 00:10:03.992 8817.571 - 8877.149: 50.2079% ( 380) 00:10:03.992 8877.149 - 8936.727: 52.7953% ( 361) 00:10:03.992 8936.727 - 8996.305: 55.4257% ( 367) 00:10:03.992 8996.305 - 9055.884: 58.0132% ( 361) 00:10:03.992 9055.884 - 9115.462: 60.4286% ( 337) 00:10:03.992 9115.462 - 9175.040: 62.7365% ( 322) 00:10:03.992 9175.040 - 9234.618: 64.8796% ( 299) 00:10:03.992 9234.618 - 9294.196: 66.8793% ( 279) 00:10:03.992 9294.196 - 9353.775: 68.7787% ( 265) 00:10:03.992 9353.775 - 9413.353: 70.4487% ( 233) 00:10:03.992 9413.353 - 9472.931: 71.9538% ( 210) 00:10:03.992 9472.931 - 9532.509: 73.4303% ( 206) 00:10:03.992 9532.509 - 9592.087: 74.7993% ( 191) 00:10:03.992 9592.087 - 9651.665: 76.1110% ( 183) 00:10:03.992 9651.665 - 9711.244: 77.4728% ( 190) 00:10:03.992 9711.244 - 9770.822: 78.7916% ( 184) 00:10:03.992 9770.822 - 9830.400: 80.1319% ( 187) 00:10:03.992 9830.400 - 9889.978: 81.4650% ( 186) 00:10:03.992 9889.978 - 9949.556: 82.8340% ( 191) 00:10:03.992 9949.556 - 10009.135: 84.1600% ( 185) 00:10:03.992 10009.135 - 10068.713: 85.5146% ( 189) 00:10:03.992 10068.713 - 10128.291: 86.7833% ( 177) 00:10:03.992 10128.291 - 10187.869: 87.9874% ( 168) 00:10:03.992 10187.869 - 10247.447: 89.0123% ( 143) 00:10:03.992 10247.447 - 10307.025: 89.9799% ( 135) 00:10:03.992 10307.025 - 10366.604: 90.8830% ( 126) 00:10:03.992 10366.604 - 10426.182: 91.6858% ( 112) 00:10:03.992 10426.182 - 10485.760: 92.4312% ( 104) 00:10:03.992 10485.760 - 10545.338: 93.0978% ( 93) 00:10:03.992 10545.338 - 10604.916: 93.6783% ( 81) 00:10:03.992 10604.916 - 10664.495: 94.1657% ( 68) 00:10:03.992 10664.495 - 10724.073: 94.5384% ( 52) 00:10:03.992 10724.073 - 10783.651: 94.8968% ( 50) 00:10:03.993 10783.651 - 10843.229: 95.2122% ( 44) 00:10:03.993 10843.229 - 10902.807: 95.4558% ( 34) 00:10:03.993 10902.807 - 10962.385: 95.6924% ( 33) 00:10:03.993 10962.385 - 11021.964: 95.9146% ( 31) 00:10:03.993 11021.964 - 11081.542: 96.1081% ( 27) 00:10:03.993 11081.542 - 11141.120: 96.2729% ( 23) 00:10:03.993 11141.120 - 11200.698: 96.4593% ( 26) 00:10:03.993 11200.698 - 11260.276: 96.6456% ( 26) 00:10:03.993 11260.276 - 11319.855: 96.8105% ( 23) 00:10:03.993 11319.855 - 11379.433: 96.9610% ( 21) 00:10:03.993 11379.433 - 11439.011: 97.0757% ( 16) 00:10:03.993 11439.011 - 11498.589: 97.1904% ( 16) 00:10:03.993 11498.589 - 11558.167: 97.2692% ( 11) 00:10:03.993 11558.167 - 11617.745: 
97.3624% ( 13) 00:10:03.993 11617.745 - 11677.324: 97.4412% ( 11) 00:10:03.993 11677.324 - 11736.902: 97.5487% ( 15) 00:10:03.993 11736.902 - 11796.480: 97.6347% ( 12) 00:10:03.993 11796.480 - 11856.058: 97.7208% ( 12) 00:10:03.993 11856.058 - 11915.636: 97.7924% ( 10) 00:10:03.993 11915.636 - 11975.215: 97.8498% ( 8) 00:10:03.993 11975.215 - 12034.793: 97.9143% ( 9) 00:10:03.993 12034.793 - 12094.371: 97.9716% ( 8) 00:10:03.993 12094.371 - 12153.949: 98.0361% ( 9) 00:10:03.993 12153.949 - 12213.527: 98.1006% ( 9) 00:10:03.993 12213.527 - 12273.105: 98.1508% ( 7) 00:10:03.993 12273.105 - 12332.684: 98.2153% ( 9) 00:10:03.993 12332.684 - 12392.262: 98.2655% ( 7) 00:10:03.993 12392.262 - 12451.840: 98.3300% ( 9) 00:10:03.993 12451.840 - 12511.418: 98.4017% ( 10) 00:10:03.993 12511.418 - 12570.996: 98.4590% ( 8) 00:10:03.993 12570.996 - 12630.575: 98.5163% ( 8) 00:10:03.993 12630.575 - 12690.153: 98.5737% ( 8) 00:10:03.993 12690.153 - 12749.731: 98.6382% ( 9) 00:10:03.993 12749.731 - 12809.309: 98.6955% ( 8) 00:10:03.993 12809.309 - 12868.887: 98.7457% ( 7) 00:10:03.993 12868.887 - 12928.465: 98.8030% ( 8) 00:10:03.993 12928.465 - 12988.044: 98.8675% ( 9) 00:10:03.993 12988.044 - 13047.622: 98.9249% ( 8) 00:10:03.993 13047.622 - 13107.200: 98.9894% ( 9) 00:10:03.993 13107.200 - 13166.778: 99.0181% ( 4) 00:10:03.993 13166.778 - 13226.356: 99.0324% ( 2) 00:10:03.993 13226.356 - 13285.935: 99.0467% ( 2) 00:10:03.993 13285.935 - 13345.513: 99.0682% ( 3) 00:10:03.993 13345.513 - 13405.091: 99.0826% ( 2) 00:10:03.993 26691.025 - 26810.182: 99.1112% ( 4) 00:10:03.993 26810.182 - 26929.338: 99.1327% ( 3) 00:10:03.993 26929.338 - 27048.495: 99.1542% ( 3) 00:10:03.993 27048.495 - 27167.651: 99.1757% ( 3) 00:10:03.993 27167.651 - 27286.807: 99.1972% ( 3) 00:10:03.993 27286.807 - 27405.964: 99.2188% ( 3) 00:10:03.993 27405.964 - 27525.120: 99.2403% ( 3) 00:10:03.993 27525.120 - 27644.276: 99.2618% ( 3) 00:10:03.993 27644.276 - 27763.433: 99.2904% ( 4) 00:10:03.993 27763.433 - 27882.589: 99.3048% ( 2) 00:10:03.993 27882.589 - 28001.745: 99.3334% ( 4) 00:10:03.993 28001.745 - 28120.902: 99.3549% ( 3) 00:10:03.993 28120.902 - 28240.058: 99.3764% ( 3) 00:10:03.993 28240.058 - 28359.215: 99.3979% ( 3) 00:10:03.993 28359.215 - 28478.371: 99.4194% ( 3) 00:10:03.993 28478.371 - 28597.527: 99.4409% ( 3) 00:10:03.993 28597.527 - 28716.684: 99.4624% ( 3) 00:10:03.993 28716.684 - 28835.840: 99.4839% ( 3) 00:10:03.993 28835.840 - 28954.996: 99.5126% ( 4) 00:10:03.993 28954.996 - 29074.153: 99.5341% ( 3) 00:10:03.993 29074.153 - 29193.309: 99.5556% ( 3) 00:10:03.993 29193.309 - 29312.465: 99.5771% ( 3) 00:10:03.993 29312.465 - 29431.622: 99.6058% ( 4) 00:10:03.993 29431.622 - 29550.778: 99.6273% ( 3) 00:10:03.993 29550.778 - 29669.935: 99.6488% ( 3) 00:10:03.993 29669.935 - 29789.091: 99.6703% ( 3) 00:10:03.993 29789.091 - 29908.247: 99.6918% ( 3) 00:10:03.993 29908.247 - 30027.404: 99.7133% ( 3) 00:10:03.993 30027.404 - 30146.560: 99.7348% ( 3) 00:10:03.993 30146.560 - 30265.716: 99.7635% ( 4) 00:10:03.993 30265.716 - 30384.873: 99.7850% ( 3) 00:10:03.993 30384.873 - 30504.029: 99.8065% ( 3) 00:10:03.993 30504.029 - 30742.342: 99.8495% ( 6) 00:10:03.993 30742.342 - 30980.655: 99.8925% ( 6) 00:10:03.993 30980.655 - 31218.967: 99.9355% ( 6) 00:10:03.993 31218.967 - 31457.280: 99.9857% ( 7) 00:10:03.993 31457.280 - 31695.593: 100.0000% ( 2) 00:10:03.993 00:10:03.993 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:03.993 ============================================================================== 
00:10:03.993 Range in us Cumulative IO count 00:10:03.993 6851.491 - 6881.280: 0.0072% ( 1) 00:10:03.993 6881.280 - 6911.069: 0.0215% ( 2) 00:10:03.993 6911.069 - 6940.858: 0.0430% ( 3) 00:10:03.993 6940.858 - 6970.647: 0.0573% ( 2) 00:10:03.993 6970.647 - 7000.436: 0.0717% ( 2) 00:10:03.993 7000.436 - 7030.225: 0.0860% ( 2) 00:10:03.993 7030.225 - 7060.015: 0.1003% ( 2) 00:10:03.993 7060.015 - 7089.804: 0.1362% ( 5) 00:10:03.993 7089.804 - 7119.593: 0.2150% ( 11) 00:10:03.993 7119.593 - 7149.382: 0.2939% ( 11) 00:10:03.993 7149.382 - 7179.171: 0.4014% ( 15) 00:10:03.993 7179.171 - 7208.960: 0.5232% ( 17) 00:10:03.993 7208.960 - 7238.749: 0.7096% ( 26) 00:10:03.993 7238.749 - 7268.538: 0.9174% ( 29) 00:10:03.993 7268.538 - 7298.327: 1.1611% ( 34) 00:10:03.993 7298.327 - 7328.116: 1.5123% ( 49) 00:10:03.993 7328.116 - 7357.905: 1.8922% ( 53) 00:10:03.993 7357.905 - 7387.695: 2.4154% ( 73) 00:10:03.993 7387.695 - 7417.484: 2.9888% ( 80) 00:10:03.993 7417.484 - 7447.273: 3.5981% ( 85) 00:10:03.993 7447.273 - 7477.062: 4.1929% ( 83) 00:10:03.993 7477.062 - 7506.851: 4.8524% ( 92) 00:10:03.993 7506.851 - 7536.640: 5.5834% ( 102) 00:10:03.993 7536.640 - 7566.429: 6.3002% ( 100) 00:10:03.993 7566.429 - 7596.218: 7.0886% ( 110) 00:10:03.993 7596.218 - 7626.007: 7.7767% ( 96) 00:10:03.993 7626.007 - 7685.585: 9.2747% ( 209) 00:10:03.993 7685.585 - 7745.164: 10.7440% ( 205) 00:10:03.993 7745.164 - 7804.742: 12.2133% ( 205) 00:10:03.993 7804.742 - 7864.320: 13.7113% ( 209) 00:10:03.993 7864.320 - 7923.898: 15.2165% ( 210) 00:10:03.993 7923.898 - 7983.476: 16.7360% ( 212) 00:10:03.993 7983.476 - 8043.055: 18.2913% ( 217) 00:10:03.993 8043.055 - 8102.633: 19.8466% ( 217) 00:10:03.993 8102.633 - 8162.211: 21.3804% ( 214) 00:10:03.993 8162.211 - 8221.789: 23.0648% ( 235) 00:10:03.993 8221.789 - 8281.367: 24.8853% ( 254) 00:10:03.993 8281.367 - 8340.945: 26.8779% ( 278) 00:10:03.993 8340.945 - 8400.524: 29.1499% ( 317) 00:10:03.993 8400.524 - 8460.102: 31.5009% ( 328) 00:10:03.993 8460.102 - 8519.680: 34.0095% ( 350) 00:10:03.993 8519.680 - 8579.258: 36.6829% ( 373) 00:10:03.993 8579.258 - 8638.836: 39.3420% ( 371) 00:10:03.993 8638.836 - 8698.415: 42.0155% ( 373) 00:10:03.993 8698.415 - 8757.993: 44.6674% ( 370) 00:10:03.993 8757.993 - 8817.571: 47.3050% ( 368) 00:10:03.993 8817.571 - 8877.149: 49.9713% ( 372) 00:10:03.993 8877.149 - 8936.727: 52.6089% ( 368) 00:10:03.993 8936.727 - 8996.305: 55.1892% ( 360) 00:10:03.993 8996.305 - 9055.884: 57.7408% ( 356) 00:10:03.993 9055.884 - 9115.462: 60.1849% ( 341) 00:10:03.993 9115.462 - 9175.040: 62.4427% ( 315) 00:10:03.993 9175.040 - 9234.618: 64.5571% ( 295) 00:10:03.993 9234.618 - 9294.196: 66.5568% ( 279) 00:10:03.993 9294.196 - 9353.775: 68.4920% ( 270) 00:10:03.993 9353.775 - 9413.353: 70.2552% ( 246) 00:10:03.993 9413.353 - 9472.931: 71.8678% ( 225) 00:10:03.993 9472.931 - 9532.509: 73.3802% ( 211) 00:10:03.993 9532.509 - 9592.087: 74.7993% ( 198) 00:10:03.993 9592.087 - 9651.665: 76.2256% ( 199) 00:10:03.993 9651.665 - 9711.244: 77.6448% ( 198) 00:10:03.993 9711.244 - 9770.822: 79.0424% ( 195) 00:10:03.993 9770.822 - 9830.400: 80.4544% ( 197) 00:10:03.993 9830.400 - 9889.978: 81.8736% ( 198) 00:10:03.993 9889.978 - 9949.556: 83.2640% ( 194) 00:10:03.993 9949.556 - 10009.135: 84.6044% ( 187) 00:10:03.993 10009.135 - 10068.713: 85.9590% ( 189) 00:10:03.993 10068.713 - 10128.291: 87.1846% ( 171) 00:10:03.993 10128.291 - 10187.869: 88.3744% ( 166) 00:10:03.993 10187.869 - 10247.447: 89.4280% ( 147) 00:10:03.993 10247.447 - 10307.025: 90.4530% ( 143) 
00:10:03.993 10307.025 - 10366.604: 91.3919% ( 131) 00:10:03.993 10366.604 - 10426.182: 92.2162% ( 115) 00:10:03.993 10426.182 - 10485.760: 92.9257% ( 99) 00:10:03.993 10485.760 - 10545.338: 93.5135% ( 82) 00:10:03.993 10545.338 - 10604.916: 94.0295% ( 72) 00:10:03.993 10604.916 - 10664.495: 94.4954% ( 65) 00:10:03.993 10664.495 - 10724.073: 94.8753% ( 53) 00:10:03.993 10724.073 - 10783.651: 95.2265% ( 49) 00:10:03.993 10783.651 - 10843.229: 95.5634% ( 47) 00:10:03.993 10843.229 - 10902.807: 95.8214% ( 36) 00:10:03.993 10902.807 - 10962.385: 96.0938% ( 38) 00:10:03.993 10962.385 - 11021.964: 96.2873% ( 27) 00:10:03.993 11021.964 - 11081.542: 96.4450% ( 22) 00:10:03.993 11081.542 - 11141.120: 96.5668% ( 17) 00:10:03.993 11141.120 - 11200.698: 96.6671% ( 14) 00:10:03.993 11200.698 - 11260.276: 96.7747% ( 15) 00:10:03.993 11260.276 - 11319.855: 96.8750% ( 14) 00:10:03.993 11319.855 - 11379.433: 96.9610% ( 12) 00:10:03.993 11379.433 - 11439.011: 97.0112% ( 7) 00:10:03.993 11439.011 - 11498.589: 97.0685% ( 8) 00:10:03.993 11498.589 - 11558.167: 97.1259% ( 8) 00:10:03.993 11558.167 - 11617.745: 97.1760% ( 7) 00:10:03.993 11617.745 - 11677.324: 97.2405% ( 9) 00:10:03.993 11677.324 - 11736.902: 97.2979% ( 8) 00:10:03.993 11736.902 - 11796.480: 97.3481% ( 7) 00:10:03.993 11796.480 - 11856.058: 97.4054% ( 8) 00:10:03.993 11856.058 - 11915.636: 97.4556% ( 7) 00:10:03.993 11915.636 - 11975.215: 97.5416% ( 12) 00:10:03.993 11975.215 - 12034.793: 97.6204% ( 11) 00:10:03.993 12034.793 - 12094.371: 97.7064% ( 12) 00:10:03.994 12094.371 - 12153.949: 97.7781% ( 10) 00:10:03.994 12153.949 - 12213.527: 97.8498% ( 10) 00:10:03.994 12213.527 - 12273.105: 97.8999% ( 7) 00:10:03.994 12273.105 - 12332.684: 97.9429% ( 6) 00:10:03.994 12332.684 - 12392.262: 97.9788% ( 5) 00:10:03.994 12392.262 - 12451.840: 98.0218% ( 6) 00:10:03.994 12451.840 - 12511.418: 98.0576% ( 5) 00:10:03.994 12511.418 - 12570.996: 98.1078% ( 7) 00:10:03.994 12570.996 - 12630.575: 98.1436% ( 5) 00:10:03.994 12630.575 - 12690.153: 98.1938% ( 7) 00:10:03.994 12690.153 - 12749.731: 98.2296% ( 5) 00:10:03.994 12749.731 - 12809.309: 98.2798% ( 7) 00:10:03.994 12809.309 - 12868.887: 98.3157% ( 5) 00:10:03.994 12868.887 - 12928.465: 98.3658% ( 7) 00:10:03.994 12928.465 - 12988.044: 98.4160% ( 7) 00:10:03.994 12988.044 - 13047.622: 98.4518% ( 5) 00:10:03.994 13047.622 - 13107.200: 98.5020% ( 7) 00:10:03.994 13107.200 - 13166.778: 98.5378% ( 5) 00:10:03.994 13166.778 - 13226.356: 98.5808% ( 6) 00:10:03.994 13226.356 - 13285.935: 98.6239% ( 6) 00:10:03.994 13285.935 - 13345.513: 98.6597% ( 5) 00:10:03.994 13345.513 - 13405.091: 98.7099% ( 7) 00:10:03.994 13405.091 - 13464.669: 98.7529% ( 6) 00:10:03.994 13464.669 - 13524.247: 98.7887% ( 5) 00:10:03.994 13524.247 - 13583.825: 98.8389% ( 7) 00:10:03.994 13583.825 - 13643.404: 98.8747% ( 5) 00:10:03.994 13643.404 - 13702.982: 98.9249% ( 7) 00:10:03.994 13702.982 - 13762.560: 98.9536% ( 4) 00:10:03.994 13762.560 - 13822.138: 98.9822% ( 4) 00:10:03.994 13822.138 - 13881.716: 99.0109% ( 4) 00:10:03.994 13881.716 - 13941.295: 99.0324% ( 3) 00:10:03.994 13941.295 - 14000.873: 99.0611% ( 4) 00:10:03.994 14000.873 - 14060.451: 99.0826% ( 3) 00:10:03.994 24784.524 - 24903.680: 99.0969% ( 2) 00:10:03.994 24903.680 - 25022.836: 99.1256% ( 4) 00:10:03.994 25022.836 - 25141.993: 99.1471% ( 3) 00:10:03.994 25141.993 - 25261.149: 99.1686% ( 3) 00:10:03.994 25261.149 - 25380.305: 99.1901% ( 3) 00:10:03.994 25380.305 - 25499.462: 99.2116% ( 3) 00:10:03.994 25499.462 - 25618.618: 99.2331% ( 3) 00:10:03.994 25618.618 - 
25737.775: 99.2546% ( 3) 00:10:03.994 25737.775 - 25856.931: 99.2833% ( 4) 00:10:03.994 25856.931 - 25976.087: 99.3048% ( 3) 00:10:03.994 25976.087 - 26095.244: 99.3263% ( 3) 00:10:03.994 26095.244 - 26214.400: 99.3478% ( 3) 00:10:03.994 26214.400 - 26333.556: 99.3764% ( 4) 00:10:03.994 26333.556 - 26452.713: 99.3979% ( 3) 00:10:03.994 26452.713 - 26571.869: 99.4194% ( 3) 00:10:03.994 26571.869 - 26691.025: 99.4409% ( 3) 00:10:03.994 26691.025 - 26810.182: 99.4696% ( 4) 00:10:03.994 26810.182 - 26929.338: 99.4839% ( 2) 00:10:03.994 26929.338 - 27048.495: 99.5054% ( 3) 00:10:03.994 27048.495 - 27167.651: 99.5269% ( 3) 00:10:03.994 27167.651 - 27286.807: 99.5485% ( 3) 00:10:03.994 27286.807 - 27405.964: 99.5771% ( 4) 00:10:03.994 27405.964 - 27525.120: 99.5986% ( 3) 00:10:03.994 27525.120 - 27644.276: 99.6130% ( 2) 00:10:03.994 27644.276 - 27763.433: 99.6345% ( 3) 00:10:03.994 27763.433 - 27882.589: 99.6560% ( 3) 00:10:03.994 27882.589 - 28001.745: 99.6775% ( 3) 00:10:03.994 28001.745 - 28120.902: 99.6990% ( 3) 00:10:03.994 28120.902 - 28240.058: 99.7205% ( 3) 00:10:03.994 28240.058 - 28359.215: 99.7491% ( 4) 00:10:03.994 28359.215 - 28478.371: 99.7706% ( 3) 00:10:03.994 28478.371 - 28597.527: 99.7921% ( 3) 00:10:03.994 28597.527 - 28716.684: 99.8136% ( 3) 00:10:03.994 28716.684 - 28835.840: 99.8423% ( 4) 00:10:03.994 28835.840 - 28954.996: 99.8567% ( 2) 00:10:03.994 28954.996 - 29074.153: 99.8782% ( 3) 00:10:03.994 29074.153 - 29193.309: 99.9068% ( 4) 00:10:03.994 29193.309 - 29312.465: 99.9283% ( 3) 00:10:03.994 29312.465 - 29431.622: 99.9498% ( 3) 00:10:03.994 29431.622 - 29550.778: 99.9785% ( 4) 00:10:03.994 29550.778 - 29669.935: 100.0000% ( 3) 00:10:03.994 00:10:03.994 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:03.994 ============================================================================== 00:10:03.994 Range in us Cumulative IO count 00:10:03.994 6821.702 - 6851.491: 0.0143% ( 2) 00:10:03.994 6851.491 - 6881.280: 0.0287% ( 2) 00:10:03.994 6881.280 - 6911.069: 0.0430% ( 2) 00:10:03.994 6911.069 - 6940.858: 0.0573% ( 2) 00:10:03.994 6940.858 - 6970.647: 0.0717% ( 2) 00:10:03.994 6970.647 - 7000.436: 0.0860% ( 2) 00:10:03.994 7000.436 - 7030.225: 0.1075% ( 3) 00:10:03.994 7030.225 - 7060.015: 0.1290% ( 3) 00:10:03.994 7060.015 - 7089.804: 0.1577% ( 4) 00:10:03.994 7089.804 - 7119.593: 0.2007% ( 6) 00:10:03.994 7119.593 - 7149.382: 0.2939% ( 13) 00:10:03.994 7149.382 - 7179.171: 0.4229% ( 18) 00:10:03.994 7179.171 - 7208.960: 0.5662% ( 20) 00:10:03.994 7208.960 - 7238.749: 0.7669% ( 28) 00:10:03.994 7238.749 - 7268.538: 1.0178% ( 35) 00:10:03.994 7268.538 - 7298.327: 1.3331% ( 44) 00:10:03.994 7298.327 - 7328.116: 1.7417% ( 57) 00:10:03.994 7328.116 - 7357.905: 2.1717% ( 60) 00:10:03.994 7357.905 - 7387.695: 2.6735% ( 70) 00:10:03.994 7387.695 - 7417.484: 3.2182% ( 76) 00:10:03.994 7417.484 - 7447.273: 3.7772% ( 78) 00:10:03.994 7447.273 - 7477.062: 4.3363% ( 78) 00:10:03.994 7477.062 - 7506.851: 4.9742% ( 89) 00:10:03.994 7506.851 - 7536.640: 5.6694% ( 97) 00:10:03.994 7536.640 - 7566.429: 6.3002% ( 88) 00:10:03.994 7566.429 - 7596.218: 6.9882% ( 96) 00:10:03.994 7596.218 - 7626.007: 7.6978% ( 99) 00:10:03.994 7626.007 - 7685.585: 9.2173% ( 212) 00:10:03.994 7685.585 - 7745.164: 10.6221% ( 196) 00:10:03.994 7745.164 - 7804.742: 12.1416% ( 212) 00:10:03.994 7804.742 - 7864.320: 13.5966% ( 203) 00:10:03.994 7864.320 - 7923.898: 15.1161% ( 212) 00:10:03.994 7923.898 - 7983.476: 16.5496% ( 200) 00:10:03.994 7983.476 - 8043.055: 18.0763% ( 213) 00:10:03.994 
8043.055 - 8102.633: 19.5599% ( 207) 00:10:03.994 8102.633 - 8162.211: 21.1941% ( 228) 00:10:03.994 8162.211 - 8221.789: 22.8498% ( 231) 00:10:03.994 8221.789 - 8281.367: 24.6416% ( 250) 00:10:03.994 8281.367 - 8340.945: 26.5768% ( 270) 00:10:03.994 8340.945 - 8400.524: 28.7629% ( 305) 00:10:03.994 8400.524 - 8460.102: 31.1998% ( 340) 00:10:03.994 8460.102 - 8519.680: 33.7873% ( 361) 00:10:03.994 8519.680 - 8579.258: 36.4392% ( 370) 00:10:03.994 8579.258 - 8638.836: 39.0912% ( 370) 00:10:03.994 8638.836 - 8698.415: 41.8506% ( 385) 00:10:03.994 8698.415 - 8757.993: 44.4596% ( 364) 00:10:03.994 8757.993 - 8817.571: 47.1832% ( 380) 00:10:03.994 8817.571 - 8877.149: 49.9642% ( 388) 00:10:03.994 8877.149 - 8936.727: 52.7021% ( 382) 00:10:03.994 8936.727 - 8996.305: 55.2681% ( 358) 00:10:03.994 8996.305 - 9055.884: 57.7982% ( 353) 00:10:03.994 9055.884 - 9115.462: 60.2781% ( 346) 00:10:03.994 9115.462 - 9175.040: 62.6003% ( 324) 00:10:03.994 9175.040 - 9234.618: 64.7936% ( 306) 00:10:03.994 9234.618 - 9294.196: 66.9008% ( 294) 00:10:03.994 9294.196 - 9353.775: 68.8503% ( 272) 00:10:03.994 9353.775 - 9413.353: 70.5060% ( 231) 00:10:03.994 9413.353 - 9472.931: 72.1259% ( 226) 00:10:03.994 9472.931 - 9532.509: 73.6167% ( 208) 00:10:03.994 9532.509 - 9592.087: 75.0502% ( 200) 00:10:03.994 9592.087 - 9651.665: 76.4192% ( 191) 00:10:03.994 9651.665 - 9711.244: 77.7953% ( 192) 00:10:03.994 9711.244 - 9770.822: 79.1284% ( 186) 00:10:03.994 9770.822 - 9830.400: 80.5046% ( 192) 00:10:03.994 9830.400 - 9889.978: 81.8664% ( 190) 00:10:03.994 9889.978 - 9949.556: 83.2784% ( 197) 00:10:03.994 9949.556 - 10009.135: 84.6474% ( 191) 00:10:03.994 10009.135 - 10068.713: 86.0020% ( 189) 00:10:03.994 10068.713 - 10128.291: 87.2921% ( 180) 00:10:03.994 10128.291 - 10187.869: 88.5178% ( 171) 00:10:03.994 10187.869 - 10247.447: 89.6359% ( 156) 00:10:03.994 10247.447 - 10307.025: 90.6178% ( 137) 00:10:03.994 10307.025 - 10366.604: 91.5138% ( 125) 00:10:03.995 10366.604 - 10426.182: 92.3954% ( 123) 00:10:03.995 10426.182 - 10485.760: 93.1551% ( 106) 00:10:03.995 10485.760 - 10545.338: 93.8145% ( 92) 00:10:03.995 10545.338 - 10604.916: 94.3091% ( 69) 00:10:03.995 10604.916 - 10664.495: 94.6961% ( 54) 00:10:03.995 10664.495 - 10724.073: 95.0401% ( 48) 00:10:03.995 10724.073 - 10783.651: 95.3483% ( 43) 00:10:03.995 10783.651 - 10843.229: 95.6494% ( 42) 00:10:03.995 10843.229 - 10902.807: 95.8644% ( 30) 00:10:03.995 10902.807 - 10962.385: 96.0364% ( 24) 00:10:03.995 10962.385 - 11021.964: 96.1654% ( 18) 00:10:03.995 11021.964 - 11081.542: 96.3016% ( 19) 00:10:03.995 11081.542 - 11141.120: 96.4163% ( 16) 00:10:03.995 11141.120 - 11200.698: 96.5453% ( 18) 00:10:03.995 11200.698 - 11260.276: 96.6600% ( 16) 00:10:03.995 11260.276 - 11319.855: 96.7675% ( 15) 00:10:03.995 11319.855 - 11379.433: 96.8607% ( 13) 00:10:03.995 11379.433 - 11439.011: 96.9180% ( 8) 00:10:03.995 11439.011 - 11498.589: 96.9538% ( 5) 00:10:03.995 11498.589 - 11558.167: 96.9897% ( 5) 00:10:03.995 11558.167 - 11617.745: 97.0255% ( 5) 00:10:03.995 11617.745 - 11677.324: 97.0614% ( 5) 00:10:03.995 11677.324 - 11736.902: 97.0972% ( 5) 00:10:03.995 11736.902 - 11796.480: 97.1402% ( 6) 00:10:03.995 11796.480 - 11856.058: 97.1689% ( 4) 00:10:03.995 11856.058 - 11915.636: 97.2047% ( 5) 00:10:03.995 11915.636 - 11975.215: 97.2405% ( 5) 00:10:03.995 11975.215 - 12034.793: 97.2835% ( 6) 00:10:03.995 12034.793 - 12094.371: 97.3337% ( 7) 00:10:03.995 12094.371 - 12153.949: 97.3982% ( 9) 00:10:03.995 12153.949 - 12213.527: 97.4627% ( 9) 00:10:03.995 12213.527 - 12273.105: 
97.5344% ( 10) 00:10:03.995 12273.105 - 12332.684: 97.5989% ( 9) 00:10:03.995 12332.684 - 12392.262: 97.6562% ( 8) 00:10:03.995 12392.262 - 12451.840: 97.7208% ( 9) 00:10:03.995 12451.840 - 12511.418: 97.7853% ( 9) 00:10:03.995 12511.418 - 12570.996: 97.8426% ( 8) 00:10:03.995 12570.996 - 12630.575: 97.9071% ( 9) 00:10:03.995 12630.575 - 12690.153: 97.9573% ( 7) 00:10:03.995 12690.153 - 12749.731: 98.0290% ( 10) 00:10:03.995 12749.731 - 12809.309: 98.0791% ( 7) 00:10:03.995 12809.309 - 12868.887: 98.1436% ( 9) 00:10:03.995 12868.887 - 12928.465: 98.2010% ( 8) 00:10:03.995 12928.465 - 12988.044: 98.2440% ( 6) 00:10:03.995 12988.044 - 13047.622: 98.2942% ( 7) 00:10:03.995 13047.622 - 13107.200: 98.3300% ( 5) 00:10:03.995 13107.200 - 13166.778: 98.3730% ( 6) 00:10:03.995 13166.778 - 13226.356: 98.4160% ( 6) 00:10:03.995 13226.356 - 13285.935: 98.4590% ( 6) 00:10:03.995 13285.935 - 13345.513: 98.5020% ( 6) 00:10:03.995 13345.513 - 13405.091: 98.5450% ( 6) 00:10:03.995 13405.091 - 13464.669: 98.5952% ( 7) 00:10:03.995 13464.669 - 13524.247: 98.6382% ( 6) 00:10:03.995 13524.247 - 13583.825: 98.6740% ( 5) 00:10:03.995 13583.825 - 13643.404: 98.7242% ( 7) 00:10:03.995 13643.404 - 13702.982: 98.7600% ( 5) 00:10:03.995 13702.982 - 13762.560: 98.8102% ( 7) 00:10:03.995 13762.560 - 13822.138: 98.8460% ( 5) 00:10:03.995 13822.138 - 13881.716: 98.8962% ( 7) 00:10:03.995 13881.716 - 13941.295: 98.9392% ( 6) 00:10:03.995 13941.295 - 14000.873: 98.9822% ( 6) 00:10:03.995 14000.873 - 14060.451: 99.0252% ( 6) 00:10:03.995 14060.451 - 14120.029: 99.0467% ( 3) 00:10:03.995 14120.029 - 14179.607: 99.0754% ( 4) 00:10:03.995 14179.607 - 14239.185: 99.0826% ( 1) 00:10:03.995 22878.022 - 22997.178: 99.0897% ( 1) 00:10:03.995 22997.178 - 23116.335: 99.1112% ( 3) 00:10:03.995 23116.335 - 23235.491: 99.1327% ( 3) 00:10:03.995 23235.491 - 23354.647: 99.1542% ( 3) 00:10:03.995 23354.647 - 23473.804: 99.1757% ( 3) 00:10:03.995 23473.804 - 23592.960: 99.1972% ( 3) 00:10:03.995 23592.960 - 23712.116: 99.2188% ( 3) 00:10:03.995 23712.116 - 23831.273: 99.2403% ( 3) 00:10:03.995 23831.273 - 23950.429: 99.2689% ( 4) 00:10:03.995 23950.429 - 24069.585: 99.2904% ( 3) 00:10:03.995 24069.585 - 24188.742: 99.3119% ( 3) 00:10:03.995 24188.742 - 24307.898: 99.3334% ( 3) 00:10:03.995 24307.898 - 24427.055: 99.3549% ( 3) 00:10:03.995 24427.055 - 24546.211: 99.3836% ( 4) 00:10:03.995 24546.211 - 24665.367: 99.4051% ( 3) 00:10:03.995 24665.367 - 24784.524: 99.4266% ( 3) 00:10:03.995 24784.524 - 24903.680: 99.4409% ( 2) 00:10:03.995 24903.680 - 25022.836: 99.4696% ( 4) 00:10:03.995 25022.836 - 25141.993: 99.4911% ( 3) 00:10:03.995 25141.993 - 25261.149: 99.5126% ( 3) 00:10:03.995 25261.149 - 25380.305: 99.5341% ( 3) 00:10:03.995 25380.305 - 25499.462: 99.5628% ( 4) 00:10:03.995 25499.462 - 25618.618: 99.5771% ( 2) 00:10:03.995 25618.618 - 25737.775: 99.6058% ( 4) 00:10:03.995 25737.775 - 25856.931: 99.6273% ( 3) 00:10:03.995 25856.931 - 25976.087: 99.6488% ( 3) 00:10:03.995 25976.087 - 26095.244: 99.6703% ( 3) 00:10:03.995 26095.244 - 26214.400: 99.6990% ( 4) 00:10:03.995 26214.400 - 26333.556: 99.7205% ( 3) 00:10:03.995 26333.556 - 26452.713: 99.7420% ( 3) 00:10:03.995 26452.713 - 26571.869: 99.7635% ( 3) 00:10:03.995 26571.869 - 26691.025: 99.7921% ( 4) 00:10:03.995 26691.025 - 26810.182: 99.8136% ( 3) 00:10:03.995 26810.182 - 26929.338: 99.8351% ( 3) 00:10:03.995 26929.338 - 27048.495: 99.8567% ( 3) 00:10:03.995 27048.495 - 27167.651: 99.8782% ( 3) 00:10:03.995 27167.651 - 27286.807: 99.8997% ( 3) 00:10:03.995 27286.807 - 27405.964: 
99.9283% ( 4) 00:10:03.995 27405.964 - 27525.120: 99.9498% ( 3) 00:10:03.995 27525.120 - 27644.276: 99.9713% ( 3) 00:10:03.995 27644.276 - 27763.433: 99.9928% ( 3) 00:10:03.995 27763.433 - 27882.589: 100.0000% ( 1) 00:10:03.995 00:10:03.995 09:47:57 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:05.370 Initializing NVMe Controllers 00:10:05.370 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:05.370 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:05.370 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:05.370 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:05.370 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:05.370 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:05.370 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:05.370 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:05.370 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:05.370 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:05.370 Initialization complete. Launching workers. 00:10:05.370 ======================================================== 00:10:05.370 Latency(us) 00:10:05.370 Device Information : IOPS MiB/s Average min max 00:10:05.370 PCIE (0000:00:06.0) NSID 1 from core 0: 11017.76 129.11 11611.49 8602.90 36148.94 00:10:05.370 PCIE (0000:00:07.0) NSID 1 from core 0: 11017.76 129.11 11598.62 8948.49 34715.84 00:10:05.370 PCIE (0000:00:09.0) NSID 1 from core 0: 11017.76 129.11 11583.97 8930.05 34242.79 00:10:05.370 PCIE (0000:00:08.0) NSID 1 from core 0: 11017.76 129.11 11569.12 9021.05 32691.85 00:10:05.370 PCIE (0000:00:08.0) NSID 2 from core 0: 11017.76 129.11 11554.39 8703.54 30912.75 00:10:05.370 PCIE (0000:00:08.0) NSID 3 from core 0: 11017.76 129.11 11540.19 8934.21 29021.73 00:10:05.370 ======================================================== 00:10:05.370 Total : 66106.54 774.69 11576.30 8602.90 36148.94 00:10:05.370 00:10:05.370 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:05.370 ================================================================================= 00:10:05.370 1.00000% : 9294.196us 00:10:05.370 10.00000% : 9949.556us 00:10:05.370 25.00000% : 10545.338us 00:10:05.370 50.00000% : 11319.855us 00:10:05.370 75.00000% : 12094.371us 00:10:05.370 90.00000% : 12809.309us 00:10:05.370 95.00000% : 13345.513us 00:10:05.370 98.00000% : 16086.109us 00:10:05.370 99.00000% : 31457.280us 00:10:05.370 99.50000% : 33840.407us 00:10:05.370 99.90000% : 35746.909us 00:10:05.370 99.99000% : 36223.535us 00:10:05.370 99.99900% : 36223.535us 00:10:05.370 99.99990% : 36223.535us 00:10:05.370 99.99999% : 36223.535us 00:10:05.370 00:10:05.370 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:05.370 ================================================================================= 00:10:05.370 1.00000% : 9472.931us 00:10:05.370 10.00000% : 10128.291us 00:10:05.370 25.00000% : 10664.495us 00:10:05.370 50.00000% : 11319.855us 00:10:05.370 75.00000% : 12034.793us 00:10:05.370 90.00000% : 12690.153us 00:10:05.370 95.00000% : 13166.778us 00:10:05.370 98.00000% : 16086.109us 00:10:05.371 99.00000% : 30265.716us 00:10:05.371 99.50000% : 32648.844us 00:10:05.371 99.90000% : 34317.033us 00:10:05.371 99.99000% : 34793.658us 00:10:05.371 99.99900% : 34793.658us 00:10:05.371 99.99990% : 34793.658us 00:10:05.371 99.99999% : 34793.658us 00:10:05.371 00:10:05.371 Summary latency data for PCIE (0000:00:09.0) NSID 1 from 
core 0: 00:10:05.371 ================================================================================= 00:10:05.371 1.00000% : 9413.353us 00:10:05.371 10.00000% : 10068.713us 00:10:05.371 25.00000% : 10604.916us 00:10:05.371 50.00000% : 11319.855us 00:10:05.371 75.00000% : 12034.793us 00:10:05.371 90.00000% : 12690.153us 00:10:05.371 95.00000% : 13166.778us 00:10:05.371 98.00000% : 16324.422us 00:10:05.371 99.00000% : 29789.091us 00:10:05.371 99.50000% : 32172.218us 00:10:05.371 99.90000% : 33840.407us 00:10:05.371 99.99000% : 34317.033us 00:10:05.371 99.99900% : 34317.033us 00:10:05.371 99.99990% : 34317.033us 00:10:05.371 99.99999% : 34317.033us 00:10:05.371 00:10:05.371 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:05.371 ================================================================================= 00:10:05.371 1.00000% : 9413.353us 00:10:05.371 10.00000% : 10128.291us 00:10:05.371 25.00000% : 10664.495us 00:10:05.371 50.00000% : 11319.855us 00:10:05.371 75.00000% : 12034.793us 00:10:05.371 90.00000% : 12690.153us 00:10:05.371 95.00000% : 13107.200us 00:10:05.371 98.00000% : 16681.891us 00:10:05.371 99.00000% : 28478.371us 00:10:05.371 99.50000% : 30742.342us 00:10:05.371 99.90000% : 32410.531us 00:10:05.371 99.99000% : 32887.156us 00:10:05.371 99.99900% : 32887.156us 00:10:05.371 99.99990% : 32887.156us 00:10:05.371 99.99999% : 32887.156us 00:10:05.371 00:10:05.371 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:05.371 ================================================================================= 00:10:05.371 1.00000% : 9413.353us 00:10:05.371 10.00000% : 10128.291us 00:10:05.371 25.00000% : 10664.495us 00:10:05.371 50.00000% : 11319.855us 00:10:05.371 75.00000% : 12034.793us 00:10:05.371 90.00000% : 12690.153us 00:10:05.371 95.00000% : 13166.778us 00:10:05.371 98.00000% : 16801.047us 00:10:05.371 99.00000% : 26810.182us 00:10:05.371 99.50000% : 29074.153us 00:10:05.371 99.90000% : 30742.342us 00:10:05.371 99.99000% : 30980.655us 00:10:05.371 99.99900% : 30980.655us 00:10:05.371 99.99990% : 30980.655us 00:10:05.371 99.99999% : 30980.655us 00:10:05.371 00:10:05.371 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:05.371 ================================================================================= 00:10:05.371 1.00000% : 9472.931us 00:10:05.371 10.00000% : 10128.291us 00:10:05.371 25.00000% : 10664.495us 00:10:05.371 50.00000% : 11319.855us 00:10:05.371 75.00000% : 12034.793us 00:10:05.371 90.00000% : 12690.153us 00:10:05.371 95.00000% : 13166.778us 00:10:05.371 98.00000% : 16086.109us 00:10:05.371 99.00000% : 25261.149us 00:10:05.371 99.50000% : 27048.495us 00:10:05.371 99.90000% : 28716.684us 00:10:05.371 99.99000% : 29074.153us 00:10:05.371 99.99900% : 29074.153us 00:10:05.371 99.99990% : 29074.153us 00:10:05.371 99.99999% : 29074.153us 00:10:05.371 00:10:05.371 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:05.371 ============================================================================== 00:10:05.371 Range in us Cumulative IO count 00:10:05.371 8579.258 - 8638.836: 0.0359% ( 4) 00:10:05.371 8638.836 - 8698.415: 0.0449% ( 1) 00:10:05.371 8698.415 - 8757.993: 0.0718% ( 3) 00:10:05.371 8757.993 - 8817.571: 0.0988% ( 3) 00:10:05.371 8817.571 - 8877.149: 0.1167% ( 2) 00:10:05.371 8877.149 - 8936.727: 0.1796% ( 7) 00:10:05.371 8936.727 - 8996.305: 0.2245% ( 5) 00:10:05.371 8996.305 - 9055.884: 0.3323% ( 12) 00:10:05.371 9055.884 - 9115.462: 0.4400% ( 12) 00:10:05.371 9115.462 - 
9175.040: 0.5657% ( 14) 00:10:05.371 9175.040 - 9234.618: 0.8890% ( 36) 00:10:05.371 9234.618 - 9294.196: 1.1764% ( 32) 00:10:05.371 9294.196 - 9353.775: 1.6254% ( 50) 00:10:05.371 9353.775 - 9413.353: 2.0384% ( 46) 00:10:05.371 9413.353 - 9472.931: 2.5593% ( 58) 00:10:05.371 9472.931 - 9532.509: 3.1340% ( 64) 00:10:05.371 9532.509 - 9592.087: 3.9332% ( 89) 00:10:05.371 9592.087 - 9651.665: 4.7234% ( 88) 00:10:05.371 9651.665 - 9711.244: 5.5585% ( 93) 00:10:05.371 9711.244 - 9770.822: 6.5104% ( 106) 00:10:05.371 9770.822 - 9830.400: 7.6329% ( 125) 00:10:05.371 9830.400 - 9889.978: 8.8631% ( 137) 00:10:05.371 9889.978 - 9949.556: 10.0844% ( 136) 00:10:05.371 9949.556 - 10009.135: 11.3685% ( 143) 00:10:05.371 10009.135 - 10068.713: 12.7155% ( 150) 00:10:05.371 10068.713 - 10128.291: 14.2331% ( 169) 00:10:05.371 10128.291 - 10187.869: 15.6879% ( 162) 00:10:05.371 10187.869 - 10247.447: 17.2504% ( 174) 00:10:05.371 10247.447 - 10307.025: 18.9027% ( 184) 00:10:05.371 10307.025 - 10366.604: 20.5460% ( 183) 00:10:05.371 10366.604 - 10426.182: 22.1624% ( 180) 00:10:05.371 10426.182 - 10485.760: 23.7338% ( 175) 00:10:05.371 10485.760 - 10545.338: 25.4580% ( 192) 00:10:05.371 10545.338 - 10604.916: 27.1372% ( 187) 00:10:05.371 10604.916 - 10664.495: 28.8973% ( 196) 00:10:05.371 10664.495 - 10724.073: 30.5945% ( 189) 00:10:05.371 10724.073 - 10783.651: 32.5970% ( 223) 00:10:05.371 10783.651 - 10843.229: 34.4019% ( 201) 00:10:05.371 10843.229 - 10902.807: 36.4494% ( 228) 00:10:05.371 10902.807 - 10962.385: 38.3621% ( 213) 00:10:05.371 10962.385 - 11021.964: 40.3556% ( 222) 00:10:05.371 11021.964 - 11081.542: 42.1785% ( 203) 00:10:05.371 11081.542 - 11141.120: 44.1272% ( 217) 00:10:05.371 11141.120 - 11200.698: 46.2195% ( 233) 00:10:05.371 11200.698 - 11260.276: 48.1501% ( 215) 00:10:05.371 11260.276 - 11319.855: 50.1886% ( 227) 00:10:05.371 11319.855 - 11379.433: 52.1642% ( 220) 00:10:05.371 11379.433 - 11439.011: 54.1218% ( 218) 00:10:05.371 11439.011 - 11498.589: 56.0884% ( 219) 00:10:05.371 11498.589 - 11558.167: 58.0819% ( 222) 00:10:05.371 11558.167 - 11617.745: 60.0754% ( 222) 00:10:05.371 11617.745 - 11677.324: 62.0600% ( 221) 00:10:05.371 11677.324 - 11736.902: 64.0086% ( 217) 00:10:05.371 11736.902 - 11796.480: 65.9573% ( 217) 00:10:05.371 11796.480 - 11856.058: 67.8251% ( 208) 00:10:05.371 11856.058 - 11915.636: 69.8635% ( 227) 00:10:05.371 11915.636 - 11975.215: 71.6505% ( 199) 00:10:05.371 11975.215 - 12034.793: 73.5183% ( 208) 00:10:05.371 12034.793 - 12094.371: 75.3143% ( 200) 00:10:05.371 12094.371 - 12153.949: 76.9576% ( 183) 00:10:05.371 12153.949 - 12213.527: 78.5201% ( 174) 00:10:05.371 12213.527 - 12273.105: 79.9210% ( 156) 00:10:05.371 12273.105 - 12332.684: 81.2949% ( 153) 00:10:05.371 12332.684 - 12392.262: 82.6239% ( 148) 00:10:05.371 12392.262 - 12451.840: 83.9619% ( 149) 00:10:05.371 12451.840 - 12511.418: 85.1922% ( 137) 00:10:05.371 12511.418 - 12570.996: 86.2967% ( 123) 00:10:05.371 12570.996 - 12630.575: 87.4731% ( 131) 00:10:05.371 12630.575 - 12690.153: 88.5147% ( 116) 00:10:05.371 12690.153 - 12749.731: 89.4846% ( 108) 00:10:05.371 12749.731 - 12809.309: 90.3107% ( 92) 00:10:05.371 12809.309 - 12868.887: 91.1099% ( 89) 00:10:05.371 12868.887 - 12928.465: 91.8014% ( 77) 00:10:05.371 12928.465 - 12988.044: 92.4928% ( 77) 00:10:05.371 12988.044 - 13047.622: 93.0585% ( 63) 00:10:05.371 13047.622 - 13107.200: 93.6063% ( 61) 00:10:05.371 13107.200 - 13166.778: 94.0374% ( 48) 00:10:05.371 13166.778 - 13226.356: 94.5312% ( 55) 00:10:05.371 13226.356 - 13285.935: 94.9443% ( 46) 
00:10:05.371 13285.935 - 13345.513: 95.3574% ( 46) 00:10:05.371 13345.513 - 13405.091: 95.7166% ( 40) 00:10:05.371 13405.091 - 13464.669: 95.9950% ( 31) 00:10:05.371 13464.669 - 13524.247: 96.2644% ( 30) 00:10:05.371 13524.247 - 13583.825: 96.5427% ( 31) 00:10:05.371 13583.825 - 13643.404: 96.7313% ( 21) 00:10:05.371 13643.404 - 13702.982: 96.8930% ( 18) 00:10:05.371 13702.982 - 13762.560: 97.0995% ( 23) 00:10:05.371 13762.560 - 13822.138: 97.2342% ( 15) 00:10:05.371 13822.138 - 13881.716: 97.3330% ( 11) 00:10:05.371 13881.716 - 13941.295: 97.3869% ( 6) 00:10:05.371 13941.295 - 14000.873: 97.4407% ( 6) 00:10:05.371 14000.873 - 14060.451: 97.4856% ( 5) 00:10:05.371 14060.451 - 14120.029: 97.4946% ( 1) 00:10:05.371 14120.029 - 14179.607: 97.5216% ( 3) 00:10:05.371 14179.607 - 14239.185: 97.5485% ( 3) 00:10:05.371 14239.185 - 14298.764: 97.5754% ( 3) 00:10:05.371 14298.764 - 14358.342: 97.6024% ( 3) 00:10:05.371 14358.342 - 14417.920: 97.6203% ( 2) 00:10:05.371 14417.920 - 14477.498: 97.6473% ( 3) 00:10:05.371 14477.498 - 14537.076: 97.6652% ( 2) 00:10:05.371 14537.076 - 14596.655: 97.6922% ( 3) 00:10:05.371 14596.655 - 14656.233: 97.7011% ( 1) 00:10:05.371 15252.015 - 15371.171: 97.7820% ( 9) 00:10:05.371 15371.171 - 15490.327: 97.8179% ( 4) 00:10:05.371 15490.327 - 15609.484: 97.8538% ( 4) 00:10:05.371 15609.484 - 15728.640: 97.8987% ( 5) 00:10:05.371 15728.640 - 15847.796: 97.9436% ( 5) 00:10:05.371 15847.796 - 15966.953: 97.9975% ( 6) 00:10:05.371 15966.953 - 16086.109: 98.0424% ( 5) 00:10:05.371 16086.109 - 16205.265: 98.0783% ( 4) 00:10:05.371 16205.265 - 16324.422: 98.1142% ( 4) 00:10:05.371 16324.422 - 16443.578: 98.1591% ( 5) 00:10:05.371 16443.578 - 16562.735: 98.1950% ( 4) 00:10:05.371 16562.735 - 16681.891: 98.2399% ( 5) 00:10:05.371 16681.891 - 16801.047: 98.2669% ( 3) 00:10:05.371 16801.047 - 16920.204: 98.3118% ( 5) 00:10:05.372 16920.204 - 17039.360: 98.3567% ( 5) 00:10:05.372 17039.360 - 17158.516: 98.3926% ( 4) 00:10:05.372 17158.516 - 17277.673: 98.4375% ( 5) 00:10:05.372 17277.673 - 17396.829: 98.4824% ( 5) 00:10:05.372 17396.829 - 17515.985: 98.5273% ( 5) 00:10:05.372 17515.985 - 17635.142: 98.5722% ( 5) 00:10:05.372 17635.142 - 17754.298: 98.6261% ( 6) 00:10:05.372 17754.298 - 17873.455: 98.6710% ( 5) 00:10:05.372 17873.455 - 17992.611: 98.7159% ( 5) 00:10:05.372 17992.611 - 18111.767: 98.7518% ( 4) 00:10:05.372 18111.767 - 18230.924: 98.8057% ( 6) 00:10:05.372 18230.924 - 18350.080: 98.8416% ( 4) 00:10:05.372 18350.080 - 18469.236: 98.8506% ( 1) 00:10:05.372 30504.029 - 30742.342: 98.8775% ( 3) 00:10:05.372 30742.342 - 30980.655: 98.9224% ( 5) 00:10:05.372 30980.655 - 31218.967: 98.9673% ( 5) 00:10:05.372 31218.967 - 31457.280: 99.0032% ( 4) 00:10:05.372 31457.280 - 31695.593: 99.0571% ( 6) 00:10:05.372 31695.593 - 31933.905: 99.1200% ( 7) 00:10:05.372 31933.905 - 32172.218: 99.1649% ( 5) 00:10:05.372 32172.218 - 32410.531: 99.2098% ( 5) 00:10:05.372 32410.531 - 32648.844: 99.2547% ( 5) 00:10:05.372 32648.844 - 32887.156: 99.2816% ( 3) 00:10:05.372 32887.156 - 33125.469: 99.3534% ( 8) 00:10:05.372 33125.469 - 33363.782: 99.3983% ( 5) 00:10:05.372 33363.782 - 33602.095: 99.4612% ( 7) 00:10:05.372 33602.095 - 33840.407: 99.5061% ( 5) 00:10:05.372 33840.407 - 34078.720: 99.5510% ( 5) 00:10:05.372 34078.720 - 34317.033: 99.5959% ( 5) 00:10:05.372 34317.033 - 34555.345: 99.6588% ( 7) 00:10:05.372 34555.345 - 34793.658: 99.7037% ( 5) 00:10:05.372 34793.658 - 35031.971: 99.7575% ( 6) 00:10:05.372 35031.971 - 35270.284: 99.8114% ( 6) 00:10:05.372 35270.284 - 35508.596: 99.8563% 
( 5) 00:10:05.372 35508.596 - 35746.909: 99.9192% ( 7) 00:10:05.372 35746.909 - 35985.222: 99.9731% ( 6) 00:10:05.372 35985.222 - 36223.535: 100.0000% ( 3) 00:10:05.372 00:10:05.372 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:05.372 ============================================================================== 00:10:05.372 Range in us Cumulative IO count 00:10:05.372 8936.727 - 8996.305: 0.0359% ( 4) 00:10:05.372 8996.305 - 9055.884: 0.0898% ( 6) 00:10:05.372 9055.884 - 9115.462: 0.1347% ( 5) 00:10:05.372 9115.462 - 9175.040: 0.1976% ( 7) 00:10:05.372 9175.040 - 9234.618: 0.3143% ( 13) 00:10:05.372 9234.618 - 9294.196: 0.4849% ( 19) 00:10:05.372 9294.196 - 9353.775: 0.6555% ( 19) 00:10:05.372 9353.775 - 9413.353: 0.7812% ( 14) 00:10:05.372 9413.353 - 9472.931: 1.0147% ( 26) 00:10:05.372 9472.931 - 9532.509: 1.2931% ( 31) 00:10:05.372 9532.509 - 9592.087: 1.7870% ( 55) 00:10:05.372 9592.087 - 9651.665: 2.4784% ( 77) 00:10:05.372 9651.665 - 9711.244: 3.2058% ( 81) 00:10:05.372 9711.244 - 9770.822: 4.0320% ( 92) 00:10:05.372 9770.822 - 9830.400: 5.0647% ( 115) 00:10:05.372 9830.400 - 9889.978: 6.1153% ( 117) 00:10:05.372 9889.978 - 9949.556: 7.1659% ( 117) 00:10:05.372 9949.556 - 10009.135: 8.3962% ( 137) 00:10:05.372 10009.135 - 10068.713: 9.5366% ( 127) 00:10:05.372 10068.713 - 10128.291: 10.9016% ( 152) 00:10:05.372 10128.291 - 10187.869: 12.3294% ( 159) 00:10:05.372 10187.869 - 10247.447: 13.9458% ( 180) 00:10:05.372 10247.447 - 10307.025: 15.5352% ( 177) 00:10:05.372 10307.025 - 10366.604: 17.1785% ( 183) 00:10:05.372 10366.604 - 10426.182: 18.9925% ( 202) 00:10:05.372 10426.182 - 10485.760: 20.8603% ( 208) 00:10:05.372 10485.760 - 10545.338: 22.8628% ( 223) 00:10:05.372 10545.338 - 10604.916: 24.7755% ( 213) 00:10:05.372 10604.916 - 10664.495: 26.7421% ( 219) 00:10:05.372 10664.495 - 10724.073: 28.7446% ( 223) 00:10:05.372 10724.073 - 10783.651: 30.6214% ( 209) 00:10:05.372 10783.651 - 10843.229: 32.6598% ( 227) 00:10:05.372 10843.229 - 10902.807: 34.7432% ( 232) 00:10:05.372 10902.807 - 10962.385: 36.9522% ( 246) 00:10:05.372 10962.385 - 11021.964: 39.1792% ( 248) 00:10:05.372 11021.964 - 11081.542: 41.5230% ( 261) 00:10:05.372 11081.542 - 11141.120: 43.9476% ( 270) 00:10:05.372 11141.120 - 11200.698: 46.2733% ( 259) 00:10:05.372 11200.698 - 11260.276: 48.4824% ( 246) 00:10:05.372 11260.276 - 11319.855: 50.7094% ( 248) 00:10:05.372 11319.855 - 11379.433: 52.9813% ( 253) 00:10:05.372 11379.433 - 11439.011: 55.2353% ( 251) 00:10:05.372 11439.011 - 11498.589: 57.4802% ( 250) 00:10:05.372 11498.589 - 11558.167: 59.7162% ( 249) 00:10:05.372 11558.167 - 11617.745: 61.9792% ( 252) 00:10:05.372 11617.745 - 11677.324: 64.2331% ( 251) 00:10:05.372 11677.324 - 11736.902: 66.3614% ( 237) 00:10:05.372 11736.902 - 11796.480: 68.4357% ( 231) 00:10:05.372 11796.480 - 11856.058: 70.5190% ( 232) 00:10:05.372 11856.058 - 11915.636: 72.6293% ( 235) 00:10:05.372 11915.636 - 11975.215: 74.5510% ( 214) 00:10:05.372 11975.215 - 12034.793: 76.4098% ( 207) 00:10:05.372 12034.793 - 12094.371: 78.0711% ( 185) 00:10:05.372 12094.371 - 12153.949: 79.6606% ( 177) 00:10:05.372 12153.949 - 12213.527: 81.1782% ( 169) 00:10:05.372 12213.527 - 12273.105: 82.5790% ( 156) 00:10:05.372 12273.105 - 12332.684: 83.9978% ( 158) 00:10:05.372 12332.684 - 12392.262: 85.2640% ( 141) 00:10:05.372 12392.262 - 12451.840: 86.4673% ( 134) 00:10:05.372 12451.840 - 12511.418: 87.6616% ( 133) 00:10:05.372 12511.418 - 12570.996: 88.6674% ( 112) 00:10:05.372 12570.996 - 12630.575: 89.6462% ( 109) 00:10:05.372 
12630.575 - 12690.153: 90.5711% ( 103) 00:10:05.372 12690.153 - 12749.731: 91.2895% ( 80) 00:10:05.372 12749.731 - 12809.309: 91.9630% ( 75) 00:10:05.372 12809.309 - 12868.887: 92.6455% ( 76) 00:10:05.372 12868.887 - 12928.465: 93.2381% ( 66) 00:10:05.372 12928.465 - 12988.044: 93.7500% ( 57) 00:10:05.372 12988.044 - 13047.622: 94.2170% ( 52) 00:10:05.372 13047.622 - 13107.200: 94.6570% ( 49) 00:10:05.372 13107.200 - 13166.778: 95.0700% ( 46) 00:10:05.372 13166.778 - 13226.356: 95.4382% ( 41) 00:10:05.372 13226.356 - 13285.935: 95.7884% ( 39) 00:10:05.372 13285.935 - 13345.513: 96.0578% ( 30) 00:10:05.372 13345.513 - 13405.091: 96.2823% ( 25) 00:10:05.372 13405.091 - 13464.669: 96.5248% ( 27) 00:10:05.372 13464.669 - 13524.247: 96.7134% ( 21) 00:10:05.372 13524.247 - 13583.825: 96.8840% ( 19) 00:10:05.372 13583.825 - 13643.404: 97.0097% ( 14) 00:10:05.372 13643.404 - 13702.982: 97.1534% ( 16) 00:10:05.372 13702.982 - 13762.560: 97.2881% ( 15) 00:10:05.372 13762.560 - 13822.138: 97.3869% ( 11) 00:10:05.372 13822.138 - 13881.716: 97.4497% ( 7) 00:10:05.372 13881.716 - 13941.295: 97.5126% ( 7) 00:10:05.372 13941.295 - 14000.873: 97.5665% ( 6) 00:10:05.372 14000.873 - 14060.451: 97.6024% ( 4) 00:10:05.372 14060.451 - 14120.029: 97.6293% ( 3) 00:10:05.372 14120.029 - 14179.607: 97.6562% ( 3) 00:10:05.372 14179.607 - 14239.185: 97.6922% ( 4) 00:10:05.372 14239.185 - 14298.764: 97.7011% ( 1) 00:10:05.372 15609.484 - 15728.640: 97.7101% ( 1) 00:10:05.372 15728.640 - 15847.796: 97.7909% ( 9) 00:10:05.372 15847.796 - 15966.953: 97.9167% ( 14) 00:10:05.372 15966.953 - 16086.109: 98.0065% ( 10) 00:10:05.372 16086.109 - 16205.265: 98.0603% ( 6) 00:10:05.372 16205.265 - 16324.422: 98.0963% ( 4) 00:10:05.372 16324.422 - 16443.578: 98.1322% ( 4) 00:10:05.372 16443.578 - 16562.735: 98.1771% ( 5) 00:10:05.372 16562.735 - 16681.891: 98.2220% ( 5) 00:10:05.372 16681.891 - 16801.047: 98.2759% ( 6) 00:10:05.372 16801.047 - 16920.204: 98.3208% ( 5) 00:10:05.372 16920.204 - 17039.360: 98.3746% ( 6) 00:10:05.372 17039.360 - 17158.516: 98.4195% ( 5) 00:10:05.372 17158.516 - 17277.673: 98.4734% ( 6) 00:10:05.372 17277.673 - 17396.829: 98.5183% ( 5) 00:10:05.372 17396.829 - 17515.985: 98.5632% ( 5) 00:10:05.372 17515.985 - 17635.142: 98.6081% ( 5) 00:10:05.372 17635.142 - 17754.298: 98.6620% ( 6) 00:10:05.372 17754.298 - 17873.455: 98.7069% ( 5) 00:10:05.372 17873.455 - 17992.611: 98.7608% ( 6) 00:10:05.372 17992.611 - 18111.767: 98.8147% ( 6) 00:10:05.372 18111.767 - 18230.924: 98.8506% ( 4) 00:10:05.372 29550.778 - 29669.935: 98.8775% ( 3) 00:10:05.372 29669.935 - 29789.091: 98.9045% ( 3) 00:10:05.372 29789.091 - 29908.247: 98.9314% ( 3) 00:10:05.372 29908.247 - 30027.404: 98.9583% ( 3) 00:10:05.372 30027.404 - 30146.560: 98.9853% ( 3) 00:10:05.372 30146.560 - 30265.716: 99.0122% ( 3) 00:10:05.372 30265.716 - 30384.873: 99.0392% ( 3) 00:10:05.372 30384.873 - 30504.029: 99.0661% ( 3) 00:10:05.372 30504.029 - 30742.342: 99.1110% ( 5) 00:10:05.372 30742.342 - 30980.655: 99.1649% ( 6) 00:10:05.372 30980.655 - 31218.967: 99.2188% ( 6) 00:10:05.372 31218.967 - 31457.280: 99.2636% ( 5) 00:10:05.372 31457.280 - 31695.593: 99.3175% ( 6) 00:10:05.372 31695.593 - 31933.905: 99.3714% ( 6) 00:10:05.372 31933.905 - 32172.218: 99.4163% ( 5) 00:10:05.372 32172.218 - 32410.531: 99.4702% ( 6) 00:10:05.372 32410.531 - 32648.844: 99.5151% ( 5) 00:10:05.372 32648.844 - 32887.156: 99.5690% ( 6) 00:10:05.372 32887.156 - 33125.469: 99.6228% ( 6) 00:10:05.372 33125.469 - 33363.782: 99.6857% ( 7) 00:10:05.372 33363.782 - 33602.095: 99.7396% 
( 6) 00:10:05.372 33602.095 - 33840.407: 99.7845% ( 5) 00:10:05.372 33840.407 - 34078.720: 99.8384% ( 6) 00:10:05.372 34078.720 - 34317.033: 99.9012% ( 7) 00:10:05.372 34317.033 - 34555.345: 99.9551% ( 6) 00:10:05.372 34555.345 - 34793.658: 100.0000% ( 5) 00:10:05.372 00:10:05.372 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:05.372 ============================================================================== 00:10:05.372 Range in us Cumulative IO count 00:10:05.372 8877.149 - 8936.727: 0.0090% ( 1) 00:10:05.372 8936.727 - 8996.305: 0.0539% ( 5) 00:10:05.373 8996.305 - 9055.884: 0.0988% ( 5) 00:10:05.373 9055.884 - 9115.462: 0.1616% ( 7) 00:10:05.373 9115.462 - 9175.040: 0.2245% ( 7) 00:10:05.373 9175.040 - 9234.618: 0.3323% ( 12) 00:10:05.373 9234.618 - 9294.196: 0.5298% ( 22) 00:10:05.373 9294.196 - 9353.775: 0.7992% ( 30) 00:10:05.373 9353.775 - 9413.353: 1.1764% ( 42) 00:10:05.373 9413.353 - 9472.931: 1.6254% ( 50) 00:10:05.373 9472.931 - 9532.509: 2.0115% ( 43) 00:10:05.373 9532.509 - 9592.087: 2.4156% ( 45) 00:10:05.373 9592.087 - 9651.665: 2.8556% ( 49) 00:10:05.373 9651.665 - 9711.244: 3.5471% ( 77) 00:10:05.373 9711.244 - 9770.822: 4.3822% ( 93) 00:10:05.373 9770.822 - 9830.400: 5.2622% ( 98) 00:10:05.373 9830.400 - 9889.978: 6.3847% ( 125) 00:10:05.373 9889.978 - 9949.556: 7.5521% ( 130) 00:10:05.373 9949.556 - 10009.135: 8.7913% ( 138) 00:10:05.373 10009.135 - 10068.713: 10.3358% ( 172) 00:10:05.373 10068.713 - 10128.291: 11.7098% ( 153) 00:10:05.373 10128.291 - 10187.869: 13.1286% ( 158) 00:10:05.373 10187.869 - 10247.447: 14.6821% ( 173) 00:10:05.373 10247.447 - 10307.025: 16.1997% ( 169) 00:10:05.373 10307.025 - 10366.604: 17.7622% ( 174) 00:10:05.373 10366.604 - 10426.182: 19.5133% ( 195) 00:10:05.373 10426.182 - 10485.760: 21.3631% ( 206) 00:10:05.373 10485.760 - 10545.338: 23.2938% ( 215) 00:10:05.373 10545.338 - 10604.916: 25.2604% ( 219) 00:10:05.373 10604.916 - 10664.495: 27.3797% ( 236) 00:10:05.373 10664.495 - 10724.073: 29.4450% ( 230) 00:10:05.373 10724.073 - 10783.651: 31.6092% ( 241) 00:10:05.373 10783.651 - 10843.229: 33.6566% ( 228) 00:10:05.373 10843.229 - 10902.807: 35.7489% ( 233) 00:10:05.373 10902.807 - 10962.385: 37.7784% ( 226) 00:10:05.373 10962.385 - 11021.964: 39.9964% ( 247) 00:10:05.373 11021.964 - 11081.542: 42.0169% ( 225) 00:10:05.373 11081.542 - 11141.120: 44.1541% ( 238) 00:10:05.373 11141.120 - 11200.698: 46.4260% ( 253) 00:10:05.373 11200.698 - 11260.276: 48.7249% ( 256) 00:10:05.373 11260.276 - 11319.855: 51.1135% ( 266) 00:10:05.373 11319.855 - 11379.433: 53.4932% ( 265) 00:10:05.373 11379.433 - 11439.011: 55.7651% ( 253) 00:10:05.373 11439.011 - 11498.589: 58.0819% ( 258) 00:10:05.373 11498.589 - 11558.167: 60.2550% ( 242) 00:10:05.373 11558.167 - 11617.745: 62.4371% ( 243) 00:10:05.373 11617.745 - 11677.324: 64.5115% ( 231) 00:10:05.373 11677.324 - 11736.902: 66.5858% ( 231) 00:10:05.373 11736.902 - 11796.480: 68.6153% ( 226) 00:10:05.373 11796.480 - 11856.058: 70.5999% ( 221) 00:10:05.373 11856.058 - 11915.636: 72.4767% ( 209) 00:10:05.373 11915.636 - 11975.215: 74.2816% ( 201) 00:10:05.373 11975.215 - 12034.793: 76.0955% ( 202) 00:10:05.373 12034.793 - 12094.371: 77.8107% ( 191) 00:10:05.373 12094.371 - 12153.949: 79.3912% ( 176) 00:10:05.373 12153.949 - 12213.527: 80.8459% ( 162) 00:10:05.373 12213.527 - 12273.105: 82.2198% ( 153) 00:10:05.373 12273.105 - 12332.684: 83.4950% ( 142) 00:10:05.373 12332.684 - 12392.262: 84.6803% ( 132) 00:10:05.373 12392.262 - 12451.840: 85.9106% ( 137) 00:10:05.373 12451.840 - 
12511.418: 87.0510% ( 127) 00:10:05.373 12511.418 - 12570.996: 88.1645% ( 124) 00:10:05.373 12570.996 - 12630.575: 89.2421% ( 120) 00:10:05.373 12630.575 - 12690.153: 90.1850% ( 105) 00:10:05.373 12690.153 - 12749.731: 91.1009% ( 102) 00:10:05.373 12749.731 - 12809.309: 91.8912% ( 88) 00:10:05.373 12809.309 - 12868.887: 92.5557% ( 74) 00:10:05.373 12868.887 - 12928.465: 93.1483% ( 66) 00:10:05.373 12928.465 - 12988.044: 93.7320% ( 65) 00:10:05.373 12988.044 - 13047.622: 94.2170% ( 54) 00:10:05.373 13047.622 - 13107.200: 94.7198% ( 56) 00:10:05.373 13107.200 - 13166.778: 95.0790% ( 40) 00:10:05.373 13166.778 - 13226.356: 95.4472% ( 41) 00:10:05.373 13226.356 - 13285.935: 95.7795% ( 37) 00:10:05.373 13285.935 - 13345.513: 96.0040% ( 25) 00:10:05.373 13345.513 - 13405.091: 96.2464% ( 27) 00:10:05.373 13405.091 - 13464.669: 96.4529% ( 23) 00:10:05.373 13464.669 - 13524.247: 96.6595% ( 23) 00:10:05.373 13524.247 - 13583.825: 96.8211% ( 18) 00:10:05.373 13583.825 - 13643.404: 96.9648% ( 16) 00:10:05.373 13643.404 - 13702.982: 97.0815% ( 13) 00:10:05.373 13702.982 - 13762.560: 97.1713% ( 10) 00:10:05.373 13762.560 - 13822.138: 97.2701% ( 11) 00:10:05.373 13822.138 - 13881.716: 97.3509% ( 9) 00:10:05.373 13881.716 - 13941.295: 97.4318% ( 9) 00:10:05.373 13941.295 - 14000.873: 97.4946% ( 7) 00:10:05.373 14000.873 - 14060.451: 97.5575% ( 7) 00:10:05.373 14060.451 - 14120.029: 97.6114% ( 6) 00:10:05.373 14120.029 - 14179.607: 97.6562% ( 5) 00:10:05.373 14179.607 - 14239.185: 97.6832% ( 3) 00:10:05.373 14239.185 - 14298.764: 97.7011% ( 2) 00:10:05.373 15728.640 - 15847.796: 97.7730% ( 8) 00:10:05.373 15847.796 - 15966.953: 97.8987% ( 14) 00:10:05.373 15966.953 - 16086.109: 97.9526% ( 6) 00:10:05.373 16086.109 - 16205.265: 97.9975% ( 5) 00:10:05.373 16205.265 - 16324.422: 98.0424% ( 5) 00:10:05.373 16324.422 - 16443.578: 98.0873% ( 5) 00:10:05.373 16443.578 - 16562.735: 98.1412% ( 6) 00:10:05.373 16562.735 - 16681.891: 98.2040% ( 7) 00:10:05.373 16681.891 - 16801.047: 98.2579% ( 6) 00:10:05.373 16801.047 - 16920.204: 98.3118% ( 6) 00:10:05.373 16920.204 - 17039.360: 98.3746% ( 7) 00:10:05.373 17039.360 - 17158.516: 98.4106% ( 4) 00:10:05.373 17158.516 - 17277.673: 98.4644% ( 6) 00:10:05.373 17277.673 - 17396.829: 98.5093% ( 5) 00:10:05.373 17396.829 - 17515.985: 98.5632% ( 6) 00:10:05.373 17515.985 - 17635.142: 98.6081% ( 5) 00:10:05.373 17635.142 - 17754.298: 98.6620% ( 6) 00:10:05.373 17754.298 - 17873.455: 98.7159% ( 6) 00:10:05.373 17873.455 - 17992.611: 98.7787% ( 7) 00:10:05.373 17992.611 - 18111.767: 98.8236% ( 5) 00:10:05.373 18111.767 - 18230.924: 98.8506% ( 3) 00:10:05.373 28954.996 - 29074.153: 98.8685% ( 2) 00:10:05.373 29074.153 - 29193.309: 98.9045% ( 4) 00:10:05.373 29193.309 - 29312.465: 98.9224% ( 2) 00:10:05.373 29312.465 - 29431.622: 98.9404% ( 2) 00:10:05.373 29431.622 - 29550.778: 98.9583% ( 2) 00:10:05.373 29550.778 - 29669.935: 98.9853% ( 3) 00:10:05.373 29669.935 - 29789.091: 99.0122% ( 3) 00:10:05.373 29789.091 - 29908.247: 99.0392% ( 3) 00:10:05.373 29908.247 - 30027.404: 99.0661% ( 3) 00:10:05.373 30027.404 - 30146.560: 99.0930% ( 3) 00:10:05.373 30146.560 - 30265.716: 99.1110% ( 2) 00:10:05.373 30265.716 - 30384.873: 99.1379% ( 3) 00:10:05.373 30384.873 - 30504.029: 99.1649% ( 3) 00:10:05.373 30504.029 - 30742.342: 99.2188% ( 6) 00:10:05.373 30742.342 - 30980.655: 99.2636% ( 5) 00:10:05.373 30980.655 - 31218.967: 99.3085% ( 5) 00:10:05.373 31218.967 - 31457.280: 99.3624% ( 6) 00:10:05.373 31457.280 - 31695.593: 99.4163% ( 6) 00:10:05.373 31695.593 - 31933.905: 99.4792% ( 7) 
00:10:05.373 31933.905 - 32172.218: 99.5241% ( 5) 00:10:05.373 32172.218 - 32410.531: 99.5779% ( 6) 00:10:05.373 32410.531 - 32648.844: 99.6318% ( 6) 00:10:05.373 32648.844 - 32887.156: 99.6857% ( 6) 00:10:05.373 32887.156 - 33125.469: 99.7396% ( 6) 00:10:05.373 33125.469 - 33363.782: 99.8024% ( 7) 00:10:05.373 33363.782 - 33602.095: 99.8563% ( 6) 00:10:05.373 33602.095 - 33840.407: 99.9102% ( 6) 00:10:05.373 33840.407 - 34078.720: 99.9641% ( 6) 00:10:05.373 34078.720 - 34317.033: 100.0000% ( 4) 00:10:05.373 00:10:05.373 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:05.373 ============================================================================== 00:10:05.373 Range in us Cumulative IO count 00:10:05.373 8996.305 - 9055.884: 0.0898% ( 10) 00:10:05.373 9055.884 - 9115.462: 0.2874% ( 22) 00:10:05.373 9115.462 - 9175.040: 0.4400% ( 17) 00:10:05.373 9175.040 - 9234.618: 0.5837% ( 16) 00:10:05.373 9234.618 - 9294.196: 0.6915% ( 12) 00:10:05.373 9294.196 - 9353.775: 0.8172% ( 14) 00:10:05.373 9353.775 - 9413.353: 1.0057% ( 21) 00:10:05.373 9413.353 - 9472.931: 1.2482% ( 27) 00:10:05.373 9472.931 - 9532.509: 1.6972% ( 50) 00:10:05.373 9532.509 - 9592.087: 2.3707% ( 75) 00:10:05.373 9592.087 - 9651.665: 3.0262% ( 73) 00:10:05.373 9651.665 - 9711.244: 3.7087% ( 76) 00:10:05.373 9711.244 - 9770.822: 4.4989% ( 88) 00:10:05.373 9770.822 - 9830.400: 5.4777% ( 109) 00:10:05.373 9830.400 - 9889.978: 6.4476% ( 108) 00:10:05.373 9889.978 - 9949.556: 7.5431% ( 122) 00:10:05.373 9949.556 - 10009.135: 8.7733% ( 137) 00:10:05.373 10009.135 - 10068.713: 9.9407% ( 130) 00:10:05.373 10068.713 - 10128.291: 11.2877% ( 150) 00:10:05.373 10128.291 - 10187.869: 12.7604% ( 164) 00:10:05.373 10187.869 - 10247.447: 14.3948% ( 182) 00:10:05.373 10247.447 - 10307.025: 16.0830% ( 188) 00:10:05.373 10307.025 - 10366.604: 17.5557% ( 164) 00:10:05.373 10366.604 - 10426.182: 19.1272% ( 175) 00:10:05.373 10426.182 - 10485.760: 20.7256% ( 178) 00:10:05.373 10485.760 - 10545.338: 22.5485% ( 203) 00:10:05.373 10545.338 - 10604.916: 24.3714% ( 203) 00:10:05.373 10604.916 - 10664.495: 26.3829% ( 224) 00:10:05.373 10664.495 - 10724.073: 28.4573% ( 231) 00:10:05.373 10724.073 - 10783.651: 30.5675% ( 235) 00:10:05.373 10783.651 - 10843.229: 32.7047% ( 238) 00:10:05.373 10843.229 - 10902.807: 34.8599% ( 240) 00:10:05.373 10902.807 - 10962.385: 37.1228% ( 252) 00:10:05.373 10962.385 - 11021.964: 39.5115% ( 266) 00:10:05.373 11021.964 - 11081.542: 41.8912% ( 265) 00:10:05.373 11081.542 - 11141.120: 44.1092% ( 247) 00:10:05.373 11141.120 - 11200.698: 46.2823% ( 242) 00:10:05.373 11200.698 - 11260.276: 48.5453% ( 252) 00:10:05.373 11260.276 - 11319.855: 50.8261% ( 254) 00:10:05.373 11319.855 - 11379.433: 53.1519% ( 259) 00:10:05.373 11379.433 - 11439.011: 55.3341% ( 243) 00:10:05.373 11439.011 - 11498.589: 57.5880% ( 251) 00:10:05.373 11498.589 - 11558.167: 59.7971% ( 246) 00:10:05.374 11558.167 - 11617.745: 62.1228% ( 259) 00:10:05.374 11617.745 - 11677.324: 64.2960% ( 242) 00:10:05.374 11677.324 - 11736.902: 66.4871% ( 244) 00:10:05.374 11736.902 - 11796.480: 68.5614% ( 231) 00:10:05.374 11796.480 - 11856.058: 70.6627% ( 234) 00:10:05.374 11856.058 - 11915.636: 72.5934% ( 215) 00:10:05.374 11915.636 - 11975.215: 74.6318% ( 227) 00:10:05.374 11975.215 - 12034.793: 76.5266% ( 211) 00:10:05.374 12034.793 - 12094.371: 78.2777% ( 195) 00:10:05.374 12094.371 - 12153.949: 79.9210% ( 183) 00:10:05.374 12153.949 - 12213.527: 81.4206% ( 167) 00:10:05.374 12213.527 - 12273.105: 82.9562% ( 171) 00:10:05.374 12273.105 - 
12332.684: 84.3840% ( 159) 00:10:05.374 12332.684 - 12392.262: 85.6591% ( 142) 00:10:05.374 12392.262 - 12451.840: 86.8175% ( 129) 00:10:05.374 12451.840 - 12511.418: 87.8412% ( 114) 00:10:05.374 12511.418 - 12570.996: 88.8380% ( 111) 00:10:05.374 12570.996 - 12630.575: 89.8168% ( 109) 00:10:05.374 12630.575 - 12690.153: 90.7507% ( 104) 00:10:05.374 12690.153 - 12749.731: 91.5948% ( 94) 00:10:05.374 12749.731 - 12809.309: 92.3851% ( 88) 00:10:05.374 12809.309 - 12868.887: 93.0585% ( 75) 00:10:05.374 12868.887 - 12928.465: 93.6512% ( 66) 00:10:05.374 12928.465 - 12988.044: 94.1990% ( 61) 00:10:05.374 12988.044 - 13047.622: 94.6659% ( 52) 00:10:05.374 13047.622 - 13107.200: 95.0880% ( 47) 00:10:05.374 13107.200 - 13166.778: 95.4831% ( 44) 00:10:05.374 13166.778 - 13226.356: 95.7974% ( 35) 00:10:05.374 13226.356 - 13285.935: 96.0848% ( 32) 00:10:05.374 13285.935 - 13345.513: 96.3452% ( 29) 00:10:05.374 13345.513 - 13405.091: 96.5517% ( 23) 00:10:05.374 13405.091 - 13464.669: 96.7583% ( 23) 00:10:05.374 13464.669 - 13524.247: 96.9379% ( 20) 00:10:05.374 13524.247 - 13583.825: 97.0546% ( 13) 00:10:05.374 13583.825 - 13643.404: 97.1893% ( 15) 00:10:05.374 13643.404 - 13702.982: 97.3150% ( 14) 00:10:05.374 13702.982 - 13762.560: 97.4048% ( 10) 00:10:05.374 13762.560 - 13822.138: 97.4587% ( 6) 00:10:05.374 13822.138 - 13881.716: 97.5126% ( 6) 00:10:05.374 13881.716 - 13941.295: 97.5485% ( 4) 00:10:05.374 13941.295 - 14000.873: 97.5754% ( 3) 00:10:05.374 14000.873 - 14060.451: 97.6024% ( 3) 00:10:05.374 14060.451 - 14120.029: 97.6383% ( 4) 00:10:05.374 14120.029 - 14179.607: 97.6652% ( 3) 00:10:05.374 14179.607 - 14239.185: 97.6922% ( 3) 00:10:05.374 14239.185 - 14298.764: 97.7011% ( 1) 00:10:05.374 15966.953 - 16086.109: 97.7371% ( 4) 00:10:05.374 16086.109 - 16205.265: 97.7820% ( 5) 00:10:05.374 16205.265 - 16324.422: 97.8448% ( 7) 00:10:05.374 16324.422 - 16443.578: 97.9077% ( 7) 00:10:05.374 16443.578 - 16562.735: 97.9616% ( 6) 00:10:05.374 16562.735 - 16681.891: 98.0244% ( 7) 00:10:05.374 16681.891 - 16801.047: 98.0873% ( 7) 00:10:05.374 16801.047 - 16920.204: 98.1412% ( 6) 00:10:05.374 16920.204 - 17039.360: 98.1950% ( 6) 00:10:05.374 17039.360 - 17158.516: 98.2579% ( 7) 00:10:05.374 17158.516 - 17277.673: 98.3118% ( 6) 00:10:05.374 17277.673 - 17396.829: 98.3746% ( 7) 00:10:05.374 17396.829 - 17515.985: 98.4285% ( 6) 00:10:05.374 17515.985 - 17635.142: 98.4824% ( 6) 00:10:05.374 17635.142 - 17754.298: 98.5453% ( 7) 00:10:05.374 17754.298 - 17873.455: 98.5991% ( 6) 00:10:05.374 17873.455 - 17992.611: 98.6530% ( 6) 00:10:05.374 17992.611 - 18111.767: 98.7159% ( 7) 00:10:05.374 18111.767 - 18230.924: 98.7698% ( 6) 00:10:05.374 18230.924 - 18350.080: 98.8236% ( 6) 00:10:05.374 18350.080 - 18469.236: 98.8506% ( 3) 00:10:05.374 28240.058 - 28359.215: 98.9673% ( 13) 00:10:05.374 28359.215 - 28478.371: 99.1110% ( 16) 00:10:05.374 28478.371 - 28597.527: 99.1379% ( 3) 00:10:05.374 28597.527 - 28716.684: 99.1559% ( 2) 00:10:05.374 28716.684 - 28835.840: 99.1828% ( 3) 00:10:05.374 28835.840 - 28954.996: 99.2008% ( 2) 00:10:05.374 28954.996 - 29074.153: 99.2098% ( 1) 00:10:05.374 29074.153 - 29193.309: 99.2367% ( 3) 00:10:05.374 29193.309 - 29312.465: 99.2636% ( 3) 00:10:05.374 29312.465 - 29431.622: 99.2816% ( 2) 00:10:05.374 29431.622 - 29550.778: 99.2996% ( 2) 00:10:05.374 29550.778 - 29669.935: 99.3175% ( 2) 00:10:05.374 29669.935 - 29789.091: 99.3355% ( 2) 00:10:05.374 29789.091 - 29908.247: 99.3534% ( 2) 00:10:05.374 29908.247 - 30027.404: 99.3894% ( 4) 00:10:05.374 30027.404 - 30146.560: 99.4163% 
( 3) 00:10:05.374 30146.560 - 30265.716: 99.4432% ( 3) 00:10:05.374 30265.716 - 30384.873: 99.4702% ( 3) 00:10:05.374 30384.873 - 30504.029: 99.4971% ( 3) 00:10:05.374 30504.029 - 30742.342: 99.5420% ( 5) 00:10:05.374 30742.342 - 30980.655: 99.6049% ( 7) 00:10:05.374 30980.655 - 31218.967: 99.6588% ( 6) 00:10:05.374 31218.967 - 31457.280: 99.7126% ( 6) 00:10:05.374 31457.280 - 31695.593: 99.7665% ( 6) 00:10:05.374 31695.593 - 31933.905: 99.8204% ( 6) 00:10:05.374 31933.905 - 32172.218: 99.8743% ( 6) 00:10:05.374 32172.218 - 32410.531: 99.9371% ( 7) 00:10:05.374 32410.531 - 32648.844: 99.9820% ( 5) 00:10:05.374 32648.844 - 32887.156: 100.0000% ( 2) 00:10:05.374 00:10:05.374 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:05.374 ============================================================================== 00:10:05.374 Range in us Cumulative IO count 00:10:05.374 8698.415 - 8757.993: 0.0359% ( 4) 00:10:05.374 8757.993 - 8817.571: 0.0629% ( 3) 00:10:05.374 8817.571 - 8877.149: 0.0988% ( 4) 00:10:05.374 8877.149 - 8936.727: 0.1347% ( 4) 00:10:05.374 8936.727 - 8996.305: 0.1616% ( 3) 00:10:05.374 8996.305 - 9055.884: 0.1976% ( 4) 00:10:05.374 9055.884 - 9115.462: 0.2335% ( 4) 00:10:05.374 9115.462 - 9175.040: 0.3053% ( 8) 00:10:05.374 9175.040 - 9234.618: 0.4580% ( 17) 00:10:05.374 9234.618 - 9294.196: 0.6555% ( 22) 00:10:05.374 9294.196 - 9353.775: 0.9159% ( 29) 00:10:05.374 9353.775 - 9413.353: 1.2123% ( 33) 00:10:05.374 9413.353 - 9472.931: 1.4817% ( 30) 00:10:05.374 9472.931 - 9532.509: 1.8229% ( 38) 00:10:05.374 9532.509 - 9592.087: 2.3527% ( 59) 00:10:05.374 9592.087 - 9651.665: 3.0801% ( 81) 00:10:05.374 9651.665 - 9711.244: 3.7626% ( 76) 00:10:05.374 9711.244 - 9770.822: 4.4720% ( 79) 00:10:05.374 9770.822 - 9830.400: 5.2443% ( 86) 00:10:05.374 9830.400 - 9889.978: 6.2320% ( 110) 00:10:05.374 9889.978 - 9949.556: 7.1839% ( 106) 00:10:05.374 9949.556 - 10009.135: 8.1717% ( 110) 00:10:05.374 10009.135 - 10068.713: 9.4379% ( 141) 00:10:05.374 10068.713 - 10128.291: 10.8387% ( 156) 00:10:05.374 10128.291 - 10187.869: 12.4102% ( 175) 00:10:05.374 10187.869 - 10247.447: 14.0625% ( 184) 00:10:05.374 10247.447 - 10307.025: 15.7238% ( 185) 00:10:05.374 10307.025 - 10366.604: 17.5736% ( 206) 00:10:05.374 10366.604 - 10426.182: 19.3068% ( 193) 00:10:05.374 10426.182 - 10485.760: 20.9860% ( 187) 00:10:05.374 10485.760 - 10545.338: 22.7460% ( 196) 00:10:05.374 10545.338 - 10604.916: 24.5510% ( 201) 00:10:05.374 10604.916 - 10664.495: 26.5266% ( 220) 00:10:05.374 10664.495 - 10724.073: 28.5022% ( 220) 00:10:05.374 10724.073 - 10783.651: 30.4867% ( 221) 00:10:05.374 10783.651 - 10843.229: 32.7496% ( 252) 00:10:05.374 10843.229 - 10902.807: 35.0036% ( 251) 00:10:05.374 10902.807 - 10962.385: 37.1318% ( 237) 00:10:05.374 10962.385 - 11021.964: 39.3139% ( 243) 00:10:05.374 11021.964 - 11081.542: 41.4691% ( 240) 00:10:05.374 11081.542 - 11141.120: 43.6602% ( 244) 00:10:05.374 11141.120 - 11200.698: 45.9770% ( 258) 00:10:05.374 11200.698 - 11260.276: 48.4016% ( 270) 00:10:05.374 11260.276 - 11319.855: 50.7184% ( 258) 00:10:05.374 11319.855 - 11379.433: 53.0891% ( 264) 00:10:05.374 11379.433 - 11439.011: 55.4149% ( 259) 00:10:05.374 11439.011 - 11498.589: 57.6688% ( 251) 00:10:05.374 11498.589 - 11558.167: 59.9587% ( 255) 00:10:05.374 11558.167 - 11617.745: 62.2486% ( 255) 00:10:05.374 11617.745 - 11677.324: 64.5205% ( 253) 00:10:05.374 11677.324 - 11736.902: 66.6577% ( 238) 00:10:05.374 11736.902 - 11796.480: 68.6961% ( 227) 00:10:05.374 11796.480 - 11856.058: 70.8064% ( 235) 
00:10:05.374 11856.058 - 11915.636: 72.7640% ( 218) 00:10:05.374 11915.636 - 11975.215: 74.6947% ( 215) 00:10:05.374 11975.215 - 12034.793: 76.5356% ( 205) 00:10:05.374 12034.793 - 12094.371: 78.2866% ( 195) 00:10:05.374 12094.371 - 12153.949: 80.0108% ( 192) 00:10:05.374 12153.949 - 12213.527: 81.5733% ( 174) 00:10:05.374 12213.527 - 12273.105: 82.9113% ( 149) 00:10:05.374 12273.105 - 12332.684: 84.2942% ( 154) 00:10:05.374 12332.684 - 12392.262: 85.5873% ( 144) 00:10:05.374 12392.262 - 12451.840: 86.6828% ( 122) 00:10:05.374 12451.840 - 12511.418: 87.6976% ( 113) 00:10:05.374 12511.418 - 12570.996: 88.6853% ( 110) 00:10:05.374 12570.996 - 12630.575: 89.5654% ( 98) 00:10:05.374 12630.575 - 12690.153: 90.4634% ( 100) 00:10:05.374 12690.153 - 12749.731: 91.3165% ( 95) 00:10:05.374 12749.731 - 12809.309: 92.1246% ( 90) 00:10:05.374 12809.309 - 12868.887: 92.9508% ( 92) 00:10:05.374 12868.887 - 12928.465: 93.6243% ( 75) 00:10:05.374 12928.465 - 12988.044: 94.1541% ( 59) 00:10:05.374 12988.044 - 13047.622: 94.5941% ( 49) 00:10:05.374 13047.622 - 13107.200: 94.9713% ( 42) 00:10:05.374 13107.200 - 13166.778: 95.3215% ( 39) 00:10:05.374 13166.778 - 13226.356: 95.6358% ( 35) 00:10:05.374 13226.356 - 13285.935: 95.9591% ( 36) 00:10:05.374 13285.935 - 13345.513: 96.2464% ( 32) 00:10:05.374 13345.513 - 13405.091: 96.4799% ( 26) 00:10:05.374 13405.091 - 13464.669: 96.6685% ( 21) 00:10:05.374 13464.669 - 13524.247: 96.8481% ( 20) 00:10:05.374 13524.247 - 13583.825: 97.0007% ( 17) 00:10:05.374 13583.825 - 13643.404: 97.1444% ( 16) 00:10:05.374 13643.404 - 13702.982: 97.2791% ( 15) 00:10:05.374 13702.982 - 13762.560: 97.3599% ( 9) 00:10:05.374 13762.560 - 13822.138: 97.4318% ( 8) 00:10:05.374 13822.138 - 13881.716: 97.4767% ( 5) 00:10:05.374 13881.716 - 13941.295: 97.5305% ( 6) 00:10:05.375 13941.295 - 14000.873: 97.5844% ( 6) 00:10:05.375 14000.873 - 14060.451: 97.6293% ( 5) 00:10:05.375 14060.451 - 14120.029: 97.6562% ( 3) 00:10:05.375 14120.029 - 14179.607: 97.6832% ( 3) 00:10:05.375 14179.607 - 14239.185: 97.7011% ( 2) 00:10:05.375 16205.265 - 16324.422: 97.7460% ( 5) 00:10:05.375 16324.422 - 16443.578: 97.8628% ( 13) 00:10:05.375 16443.578 - 16562.735: 97.9616% ( 11) 00:10:05.375 16562.735 - 16681.891: 97.9975% ( 4) 00:10:05.375 16681.891 - 16801.047: 98.0424% ( 5) 00:10:05.375 16801.047 - 16920.204: 98.0873% ( 5) 00:10:05.375 16920.204 - 17039.360: 98.1501% ( 7) 00:10:05.375 17039.360 - 17158.516: 98.2130% ( 7) 00:10:05.375 17158.516 - 17277.673: 98.2759% ( 7) 00:10:05.375 17277.673 - 17396.829: 98.3297% ( 6) 00:10:05.375 17396.829 - 17515.985: 98.3836% ( 6) 00:10:05.375 17515.985 - 17635.142: 98.4465% ( 7) 00:10:05.375 17635.142 - 17754.298: 98.5004% ( 6) 00:10:05.375 17754.298 - 17873.455: 98.5632% ( 7) 00:10:05.375 17873.455 - 17992.611: 98.6171% ( 6) 00:10:05.375 17992.611 - 18111.767: 98.6800% ( 7) 00:10:05.375 18111.767 - 18230.924: 98.7428% ( 7) 00:10:05.375 18230.924 - 18350.080: 98.8057% ( 7) 00:10:05.375 18350.080 - 18469.236: 98.8506% ( 5) 00:10:05.375 26214.400 - 26333.556: 98.8865% ( 4) 00:10:05.375 26333.556 - 26452.713: 98.9404% ( 6) 00:10:05.375 26452.713 - 26571.869: 98.9763% ( 4) 00:10:05.375 26571.869 - 26691.025: 98.9943% ( 2) 00:10:05.375 26691.025 - 26810.182: 99.0212% ( 3) 00:10:05.375 26810.182 - 26929.338: 99.0481% ( 3) 00:10:05.375 26929.338 - 27048.495: 99.0661% ( 2) 00:10:05.375 27048.495 - 27167.651: 99.0930% ( 3) 00:10:05.375 27167.651 - 27286.807: 99.1200% ( 3) 00:10:05.375 27286.807 - 27405.964: 99.1469% ( 3) 00:10:05.375 27405.964 - 27525.120: 99.1739% ( 3) 
00:10:05.375 27525.120 - 27644.276: 99.1918% ( 2) 00:10:05.375 27644.276 - 27763.433: 99.2188% ( 3) 00:10:05.375 27763.433 - 27882.589: 99.2457% ( 3) 00:10:05.375 27882.589 - 28001.745: 99.2726% ( 3) 00:10:05.375 28001.745 - 28120.902: 99.3085% ( 4) 00:10:05.375 28120.902 - 28240.058: 99.3355% ( 3) 00:10:05.375 28240.058 - 28359.215: 99.3624% ( 3) 00:10:05.375 28359.215 - 28478.371: 99.3894% ( 3) 00:10:05.375 28478.371 - 28597.527: 99.4163% ( 3) 00:10:05.375 28597.527 - 28716.684: 99.4432% ( 3) 00:10:05.375 28716.684 - 28835.840: 99.4702% ( 3) 00:10:05.375 28835.840 - 28954.996: 99.4971% ( 3) 00:10:05.375 28954.996 - 29074.153: 99.5241% ( 3) 00:10:05.375 29074.153 - 29193.309: 99.5510% ( 3) 00:10:05.375 29193.309 - 29312.465: 99.5779% ( 3) 00:10:05.375 29312.465 - 29431.622: 99.6139% ( 4) 00:10:05.375 29431.622 - 29550.778: 99.6408% ( 3) 00:10:05.375 29550.778 - 29669.935: 99.6767% ( 4) 00:10:05.375 29669.935 - 29789.091: 99.7126% ( 4) 00:10:05.375 29789.091 - 29908.247: 99.7396% ( 3) 00:10:05.375 29908.247 - 30027.404: 99.7665% ( 3) 00:10:05.375 30027.404 - 30146.560: 99.8024% ( 4) 00:10:05.375 30146.560 - 30265.716: 99.8294% ( 3) 00:10:05.375 30265.716 - 30384.873: 99.8653% ( 4) 00:10:05.375 30384.873 - 30504.029: 99.8922% ( 3) 00:10:05.375 30504.029 - 30742.342: 99.9551% ( 7) 00:10:05.375 30742.342 - 30980.655: 100.0000% ( 5) 00:10:05.375 00:10:05.375 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:05.375 ============================================================================== 00:10:05.375 Range in us Cumulative IO count 00:10:05.375 8877.149 - 8936.727: 0.0090% ( 1) 00:10:05.375 8936.727 - 8996.305: 0.0629% ( 6) 00:10:05.375 8996.305 - 9055.884: 0.0988% ( 4) 00:10:05.375 9055.884 - 9115.462: 0.1167% ( 2) 00:10:05.375 9115.462 - 9175.040: 0.1527% ( 4) 00:10:05.375 9175.040 - 9234.618: 0.2065% ( 6) 00:10:05.375 9234.618 - 9294.196: 0.2694% ( 7) 00:10:05.375 9294.196 - 9353.775: 0.4131% ( 16) 00:10:05.375 9353.775 - 9413.353: 0.7364% ( 36) 00:10:05.375 9413.353 - 9472.931: 1.0057% ( 30) 00:10:05.375 9472.931 - 9532.509: 1.4278% ( 47) 00:10:05.375 9532.509 - 9592.087: 1.8678% ( 49) 00:10:05.375 9592.087 - 9651.665: 2.4156% ( 61) 00:10:05.375 9651.665 - 9711.244: 2.9813% ( 63) 00:10:05.375 9711.244 - 9770.822: 3.5111% ( 59) 00:10:05.375 9770.822 - 9830.400: 4.2205% ( 79) 00:10:05.375 9830.400 - 9889.978: 5.1545% ( 104) 00:10:05.375 9889.978 - 9949.556: 6.2141% ( 118) 00:10:05.375 9949.556 - 10009.135: 7.4264% ( 135) 00:10:05.375 10009.135 - 10068.713: 8.9260% ( 167) 00:10:05.375 10068.713 - 10128.291: 10.6142% ( 188) 00:10:05.375 10128.291 - 10187.869: 12.1857% ( 175) 00:10:05.375 10187.869 - 10247.447: 13.7931% ( 179) 00:10:05.375 10247.447 - 10307.025: 15.3107% ( 169) 00:10:05.375 10307.025 - 10366.604: 16.9271% ( 180) 00:10:05.375 10366.604 - 10426.182: 18.6063% ( 187) 00:10:05.375 10426.182 - 10485.760: 20.3305% ( 192) 00:10:05.375 10485.760 - 10545.338: 22.2342% ( 212) 00:10:05.375 10545.338 - 10604.916: 24.2277% ( 222) 00:10:05.375 10604.916 - 10664.495: 26.3290% ( 234) 00:10:05.375 10664.495 - 10724.073: 28.5022% ( 242) 00:10:05.375 10724.073 - 10783.651: 30.5765% ( 231) 00:10:05.375 10783.651 - 10843.229: 32.6509% ( 231) 00:10:05.375 10843.229 - 10902.807: 34.5995% ( 217) 00:10:05.375 10902.807 - 10962.385: 36.5661% ( 219) 00:10:05.375 10962.385 - 11021.964: 38.7302% ( 241) 00:10:05.375 11021.964 - 11081.542: 41.0111% ( 254) 00:10:05.375 11081.542 - 11141.120: 43.3818% ( 264) 00:10:05.375 11141.120 - 11200.698: 45.8154% ( 271) 00:10:05.375 11200.698 - 
11260.276: 48.2399% ( 270) 00:10:05.375 11260.276 - 11319.855: 50.6017% ( 263) 00:10:05.375 11319.855 - 11379.433: 52.9454% ( 261) 00:10:05.375 11379.433 - 11439.011: 55.2443% ( 256) 00:10:05.375 11439.011 - 11498.589: 57.5970% ( 262) 00:10:05.375 11498.589 - 11558.167: 59.8509% ( 251) 00:10:05.375 11558.167 - 11617.745: 62.1408% ( 255) 00:10:05.375 11617.745 - 11677.324: 64.3858% ( 250) 00:10:05.375 11677.324 - 11736.902: 66.6218% ( 249) 00:10:05.375 11736.902 - 11796.480: 68.7680% ( 239) 00:10:05.375 11796.480 - 11856.058: 70.8693% ( 234) 00:10:05.375 11856.058 - 11915.636: 72.8628% ( 222) 00:10:05.375 11915.636 - 11975.215: 74.8204% ( 218) 00:10:05.375 11975.215 - 12034.793: 76.6254% ( 201) 00:10:05.375 12034.793 - 12094.371: 78.3226% ( 189) 00:10:05.375 12094.371 - 12153.949: 79.9479% ( 181) 00:10:05.375 12153.949 - 12213.527: 81.5104% ( 174) 00:10:05.375 12213.527 - 12273.105: 82.8933% ( 154) 00:10:05.375 12273.105 - 12332.684: 84.2852% ( 155) 00:10:05.375 12332.684 - 12392.262: 85.4705% ( 132) 00:10:05.375 12392.262 - 12451.840: 86.7457% ( 142) 00:10:05.375 12451.840 - 12511.418: 87.8682% ( 125) 00:10:05.375 12511.418 - 12570.996: 88.8290% ( 107) 00:10:05.375 12570.996 - 12630.575: 89.7809% ( 106) 00:10:05.375 12630.575 - 12690.153: 90.6609% ( 98) 00:10:05.375 12690.153 - 12749.731: 91.4242% ( 85) 00:10:05.375 12749.731 - 12809.309: 92.1067% ( 76) 00:10:05.375 12809.309 - 12868.887: 92.7173% ( 68) 00:10:05.375 12868.887 - 12928.465: 93.2561% ( 60) 00:10:05.375 12928.465 - 12988.044: 93.7680% ( 57) 00:10:05.375 12988.044 - 13047.622: 94.2170% ( 50) 00:10:05.375 13047.622 - 13107.200: 94.6659% ( 50) 00:10:05.375 13107.200 - 13166.778: 95.0970% ( 48) 00:10:05.375 13166.778 - 13226.356: 95.4741% ( 42) 00:10:05.375 13226.356 - 13285.935: 95.8423% ( 41) 00:10:05.375 13285.935 - 13345.513: 96.1117% ( 30) 00:10:05.375 13345.513 - 13405.091: 96.3721% ( 29) 00:10:05.375 13405.091 - 13464.669: 96.6056% ( 26) 00:10:05.375 13464.669 - 13524.247: 96.7672% ( 18) 00:10:05.375 13524.247 - 13583.825: 96.8840% ( 13) 00:10:05.375 13583.825 - 13643.404: 96.9738% ( 10) 00:10:05.375 13643.404 - 13702.982: 97.0546% ( 9) 00:10:05.375 13702.982 - 13762.560: 97.0995% ( 5) 00:10:05.375 13762.560 - 13822.138: 97.1534% ( 6) 00:10:05.375 13822.138 - 13881.716: 97.2073% ( 6) 00:10:05.375 13881.716 - 13941.295: 97.2522% ( 5) 00:10:05.375 13941.295 - 14000.873: 97.2971% ( 5) 00:10:05.376 14000.873 - 14060.451: 97.3509% ( 6) 00:10:05.376 14060.451 - 14120.029: 97.3958% ( 5) 00:10:05.376 14120.029 - 14179.607: 97.4587% ( 7) 00:10:05.376 14179.607 - 14239.185: 97.5036% ( 5) 00:10:05.376 14239.185 - 14298.764: 97.5575% ( 6) 00:10:05.376 14298.764 - 14358.342: 97.6114% ( 6) 00:10:05.376 14358.342 - 14417.920: 97.6473% ( 4) 00:10:05.376 14417.920 - 14477.498: 97.6652% ( 2) 00:10:05.376 14477.498 - 14537.076: 97.6922% ( 3) 00:10:05.376 14537.076 - 14596.655: 97.7011% ( 1) 00:10:05.376 15490.327 - 15609.484: 97.8089% ( 12) 00:10:05.376 15609.484 - 15728.640: 97.9077% ( 11) 00:10:05.376 15728.640 - 15847.796: 97.9526% ( 5) 00:10:05.376 15847.796 - 15966.953: 97.9885% ( 4) 00:10:05.376 15966.953 - 16086.109: 98.0244% ( 4) 00:10:05.376 16086.109 - 16205.265: 98.0783% ( 6) 00:10:05.376 16205.265 - 16324.422: 98.1232% ( 5) 00:10:05.376 16324.422 - 16443.578: 98.1771% ( 6) 00:10:05.376 16443.578 - 16562.735: 98.2220% ( 5) 00:10:05.376 16562.735 - 16681.891: 98.2669% ( 5) 00:10:05.376 16681.891 - 16801.047: 98.3118% ( 5) 00:10:05.376 16801.047 - 16920.204: 98.3567% ( 5) 00:10:05.376 16920.204 - 17039.360: 98.4016% ( 5) 00:10:05.376 
17039.360 - 17158.516: 98.4555% ( 6) 00:10:05.376 17158.516 - 17277.673: 98.5093% ( 6) 00:10:05.376 17277.673 - 17396.829: 98.5542% ( 5) 00:10:05.376 17396.829 - 17515.985: 98.6081% ( 6) 00:10:05.376 17515.985 - 17635.142: 98.6620% ( 6) 00:10:05.376 17635.142 - 17754.298: 98.7159% ( 6) 00:10:05.376 17754.298 - 17873.455: 98.7698% ( 6) 00:10:05.376 17873.455 - 17992.611: 98.8236% ( 6) 00:10:05.376 17992.611 - 18111.767: 98.8506% ( 3) 00:10:05.376 25022.836 - 25141.993: 98.9404% ( 10) 00:10:05.376 25141.993 - 25261.149: 99.0661% ( 14) 00:10:05.376 25261.149 - 25380.305: 99.1200% ( 6) 00:10:05.376 25380.305 - 25499.462: 99.1469% ( 3) 00:10:05.376 25499.462 - 25618.618: 99.1739% ( 3) 00:10:05.376 25618.618 - 25737.775: 99.2008% ( 3) 00:10:05.376 25737.775 - 25856.931: 99.2277% ( 3) 00:10:05.376 25856.931 - 25976.087: 99.2547% ( 3) 00:10:05.376 25976.087 - 26095.244: 99.2816% ( 3) 00:10:05.376 26095.244 - 26214.400: 99.3085% ( 3) 00:10:05.376 26214.400 - 26333.556: 99.3355% ( 3) 00:10:05.376 26333.556 - 26452.713: 99.3624% ( 3) 00:10:05.376 26452.713 - 26571.869: 99.3894% ( 3) 00:10:05.376 26571.869 - 26691.025: 99.4253% ( 4) 00:10:05.376 26691.025 - 26810.182: 99.4522% ( 3) 00:10:05.376 26810.182 - 26929.338: 99.4792% ( 3) 00:10:05.376 26929.338 - 27048.495: 99.5061% ( 3) 00:10:05.376 27048.495 - 27167.651: 99.5330% ( 3) 00:10:05.376 27167.651 - 27286.807: 99.5600% ( 3) 00:10:05.376 27286.807 - 27405.964: 99.5869% ( 3) 00:10:05.376 27405.964 - 27525.120: 99.6228% ( 4) 00:10:05.376 27525.120 - 27644.276: 99.6498% ( 3) 00:10:05.376 27644.276 - 27763.433: 99.6857% ( 4) 00:10:05.376 27763.433 - 27882.589: 99.7126% ( 3) 00:10:05.376 27882.589 - 28001.745: 99.7486% ( 4) 00:10:05.376 28001.745 - 28120.902: 99.7755% ( 3) 00:10:05.376 28120.902 - 28240.058: 99.8024% ( 3) 00:10:05.376 28240.058 - 28359.215: 99.8294% ( 3) 00:10:05.376 28359.215 - 28478.371: 99.8563% ( 3) 00:10:05.376 28478.371 - 28597.527: 99.8922% ( 4) 00:10:05.376 28597.527 - 28716.684: 99.9192% ( 3) 00:10:05.376 28716.684 - 28835.840: 99.9551% ( 4) 00:10:05.376 28835.840 - 28954.996: 99.9820% ( 3) 00:10:05.376 28954.996 - 29074.153: 100.0000% ( 2) 00:10:05.376 00:10:05.376 09:47:59 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:05.376 00:10:05.376 real 0m2.844s 00:10:05.376 user 0m2.484s 00:10:05.376 sys 0m0.261s 00:10:05.376 09:47:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.376 ************************************ 00:10:05.376 END TEST nvme_perf 00:10:05.376 ************************************ 00:10:05.376 09:47:59 -- common/autotest_common.sh@10 -- # set +x 00:10:05.376 09:47:59 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:05.376 09:47:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:05.376 09:47:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:05.376 09:47:59 -- common/autotest_common.sh@10 -- # set +x 00:10:05.376 ************************************ 00:10:05.376 START TEST nvme_hello_world 00:10:05.376 ************************************ 00:10:05.376 09:47:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:05.943 Initializing NVMe Controllers 00:10:05.943 Attached to 0000:00:06.0 00:10:05.943 Namespace ID: 1 size: 6GB 00:10:05.943 Attached to 0000:00:07.0 00:10:05.943 Namespace ID: 1 size: 5GB 00:10:05.943 Attached to 0000:00:09.0 00:10:05.943 Namespace ID: 1 size: 1GB 00:10:05.943 Attached to 0000:00:08.0 00:10:05.943 Namespace ID: 1 size: 4GB 
00:10:05.943 Namespace ID: 2 size: 4GB 00:10:05.943 Namespace ID: 3 size: 4GB 00:10:05.943 Initialization complete. 00:10:05.943 INFO: using host memory buffer for IO 00:10:05.943 Hello world! 00:10:05.943 INFO: using host memory buffer for IO 00:10:05.943 Hello world! 00:10:05.943 INFO: using host memory buffer for IO 00:10:05.943 Hello world! 00:10:05.943 INFO: using host memory buffer for IO 00:10:05.943 Hello world! 00:10:05.943 INFO: using host memory buffer for IO 00:10:05.943 Hello world! 00:10:05.943 INFO: using host memory buffer for IO 00:10:05.943 Hello world! 00:10:05.943 00:10:05.943 real 0m0.394s 00:10:05.943 user 0m0.222s 00:10:05.943 sys 0m0.124s 00:10:05.943 09:47:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.943 09:47:59 -- common/autotest_common.sh@10 -- # set +x 00:10:05.943 ************************************ 00:10:05.943 END TEST nvme_hello_world 00:10:05.943 ************************************ 00:10:05.943 09:47:59 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:05.943 09:47:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:05.943 09:47:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:05.943 09:47:59 -- common/autotest_common.sh@10 -- # set +x 00:10:05.943 ************************************ 00:10:05.943 START TEST nvme_sgl 00:10:05.943 ************************************ 00:10:05.943 09:47:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:06.202 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:10:06.202 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:10:06.202 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:10:06.202 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:10:06.202 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:10:06.202 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:10:06.202 0000:00:07.0: build_io_request_0 Invalid IO length parameter 00:10:06.202 0000:00:07.0: build_io_request_1 Invalid IO length parameter 00:10:06.202 0000:00:07.0: build_io_request_3 Invalid IO length parameter 00:10:06.461 0000:00:07.0: build_io_request_8 Invalid IO length parameter 00:10:06.461 0000:00:07.0: build_io_request_9 Invalid IO length parameter 00:10:06.461 0000:00:07.0: build_io_request_11 Invalid IO length parameter 00:10:06.461 0000:00:09.0: build_io_request_0 Invalid IO length parameter 00:10:06.461 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:10:06.461 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:10:06.461 0000:00:09.0: build_io_request_3 Invalid IO length parameter 00:10:06.461 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:10:06.461 0000:00:09.0: build_io_request_5 Invalid IO length parameter 00:10:06.461 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:10:06.461 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:10:06.461 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:10:06.461 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:10:06.461 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:10:06.461 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:10:06.461 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:10:06.461 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:10:06.461 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:10:06.461 0000:00:08.0: 
build_io_request_3 Invalid IO length parameter 00:10:06.461 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:10:06.461 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:10:06.461 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:10:06.461 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:10:06.461 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:10:06.461 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:10:06.461 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:10:06.461 0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:10:06.461 NVMe Readv/Writev Request test 00:10:06.461 Attached to 0000:00:06.0 00:10:06.461 Attached to 0000:00:07.0 00:10:06.461 Attached to 0000:00:09.0 00:10:06.461 Attached to 0000:00:08.0 00:10:06.461 0000:00:06.0: build_io_request_2 test passed 00:10:06.461 0000:00:06.0: build_io_request_4 test passed 00:10:06.461 0000:00:06.0: build_io_request_5 test passed 00:10:06.461 0000:00:06.0: build_io_request_6 test passed 00:10:06.461 0000:00:06.0: build_io_request_7 test passed 00:10:06.461 0000:00:06.0: build_io_request_10 test passed 00:10:06.461 0000:00:07.0: build_io_request_2 test passed 00:10:06.461 0000:00:07.0: build_io_request_4 test passed 00:10:06.461 0000:00:07.0: build_io_request_5 test passed 00:10:06.461 0000:00:07.0: build_io_request_6 test passed 00:10:06.461 0000:00:07.0: build_io_request_7 test passed 00:10:06.461 0000:00:07.0: build_io_request_10 test passed 00:10:06.461 Cleaning up... 00:10:06.461 00:10:06.461 real 0m0.511s 00:10:06.461 user 0m0.342s 00:10:06.461 sys 0m0.126s 00:10:06.461 09:48:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.461 ************************************ 00:10:06.461 END TEST nvme_sgl 00:10:06.461 ************************************ 00:10:06.461 09:48:00 -- common/autotest_common.sh@10 -- # set +x 00:10:06.461 09:48:00 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:06.461 09:48:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:06.461 09:48:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:06.461 09:48:00 -- common/autotest_common.sh@10 -- # set +x 00:10:06.461 ************************************ 00:10:06.461 START TEST nvme_e2edp 00:10:06.461 ************************************ 00:10:06.461 09:48:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:06.720 NVMe Write/Read with End-to-End data protection test 00:10:06.720 Attached to 0000:00:06.0 00:10:06.720 Attached to 0000:00:07.0 00:10:06.720 Attached to 0000:00:09.0 00:10:06.720 Attached to 0000:00:08.0 00:10:06.720 Cleaning up... 
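[Editor's note] Every test in this log follows the same harness pattern: a START banner, the test binary timed by the shell, then an END banner alongside the elapsed real/user/sys lines. A minimal sketch of such a wrapper, reconstructed only from the banners and timings visible in this log (the actual run_test helper lives in autotest_common.sh and is more elaborate):

    run_test() {
        # First argument is the banner name; the rest is the command to run.
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # produces the real/user/sys lines seen in this log
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    # Usage mirroring the invocation above:
    # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp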
00:10:06.720 00:10:06.720 real 0m0.272s 00:10:06.720 user 0m0.098s 00:10:06.720 sys 0m0.129s 00:10:06.720 09:48:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.720 ************************************ 00:10:06.720 END TEST nvme_e2edp 00:10:06.720 ************************************ 00:10:06.720 09:48:00 -- common/autotest_common.sh@10 -- # set +x 00:10:06.720 09:48:00 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:06.720 09:48:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:06.720 09:48:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:06.720 09:48:00 -- common/autotest_common.sh@10 -- # set +x 00:10:06.720 ************************************ 00:10:06.720 START TEST nvme_reserve 00:10:06.720 ************************************ 00:10:06.720 09:48:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:06.979 ===================================================== 00:10:06.979 NVMe Controller at PCI bus 0, device 6, function 0 00:10:06.979 ===================================================== 00:10:06.979 Reservations: Not Supported 00:10:06.979 ===================================================== 00:10:06.979 NVMe Controller at PCI bus 0, device 7, function 0 00:10:06.979 ===================================================== 00:10:06.979 Reservations: Not Supported 00:10:06.979 ===================================================== 00:10:06.979 NVMe Controller at PCI bus 0, device 9, function 0 00:10:06.979 ===================================================== 00:10:06.979 Reservations: Not Supported 00:10:06.979 ===================================================== 00:10:06.979 NVMe Controller at PCI bus 0, device 8, function 0 00:10:06.979 ===================================================== 00:10:06.979 Reservations: Not Supported 00:10:06.979 Reservation test passed 00:10:06.979 00:10:06.979 real 0m0.280s 00:10:06.979 user 0m0.093s 00:10:06.979 sys 0m0.136s 00:10:06.979 09:48:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.979 09:48:00 -- common/autotest_common.sh@10 -- # set +x 00:10:06.979 ************************************ 00:10:06.979 END TEST nvme_reserve 00:10:06.979 ************************************ 00:10:06.979 09:48:00 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:06.979 09:48:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:06.979 09:48:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:06.979 09:48:00 -- common/autotest_common.sh@10 -- # set +x 00:10:06.979 ************************************ 00:10:06.979 START TEST nvme_err_injection 00:10:06.979 ************************************ 00:10:06.979 09:48:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:07.545 NVMe Error Injection test 00:10:07.545 Attached to 0000:00:06.0 00:10:07.545 Attached to 0000:00:07.0 00:10:07.545 Attached to 0000:00:09.0 00:10:07.545 Attached to 0000:00:08.0 00:10:07.545 0000:00:09.0: get features failed as expected 00:10:07.545 0000:00:08.0: get features failed as expected 00:10:07.545 0000:00:06.0: get features failed as expected 00:10:07.545 0000:00:07.0: get features failed as expected 00:10:07.545 0000:00:08.0: get features successfully as expected 00:10:07.545 0000:00:06.0: get features successfully as expected 00:10:07.545 0000:00:07.0: get features 
successfully as expected 00:10:07.545 0000:00:09.0: get features successfully as expected 00:10:07.545 0000:00:06.0: read failed as expected 00:10:07.545 0000:00:07.0: read failed as expected 00:10:07.545 0000:00:09.0: read failed as expected 00:10:07.545 0000:00:08.0: read failed as expected 00:10:07.545 0000:00:06.0: read successfully as expected 00:10:07.545 0000:00:07.0: read successfully as expected 00:10:07.545 0000:00:09.0: read successfully as expected 00:10:07.545 0000:00:08.0: read successfully as expected 00:10:07.545 Cleaning up... 00:10:07.545 00:10:07.545 real 0m0.344s 00:10:07.545 user 0m0.172s 00:10:07.545 sys 0m0.125s 00:10:07.545 09:48:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.545 ************************************ 00:10:07.545 END TEST nvme_err_injection 00:10:07.545 ************************************ 00:10:07.545 09:48:01 -- common/autotest_common.sh@10 -- # set +x 00:10:07.545 09:48:01 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:07.545 09:48:01 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:07.545 09:48:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:07.545 09:48:01 -- common/autotest_common.sh@10 -- # set +x 00:10:07.545 ************************************ 00:10:07.545 START TEST nvme_overhead 00:10:07.545 ************************************ 00:10:07.545 09:48:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:08.921 Initializing NVMe Controllers 00:10:08.921 Attached to 0000:00:06.0 00:10:08.921 Attached to 0000:00:07.0 00:10:08.921 Attached to 0000:00:09.0 00:10:08.921 Attached to 0000:00:08.0 00:10:08.921 Initialization complete. Launching workers. 
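[Editor's note] The overhead invocation above carries its whole workload on the command line. A hedged reading of those flags, inferred from the output that follows rather than from the tool's help text:

    # -o 4096   IO size in bytes for each submitted command
    # -t 1      run time in seconds
    # -H        print the submit/complete latency histograms that follow (inferred)
    # -i 0      shared-memory instance id, matching the -i 0 passed to the other
    #           per-test binaries in this log (inferred)
    /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0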
00:10:08.921 submit (in ns) avg, min, max = 15601.5, 12898.2, 83475.9 00:10:08.921 complete (in ns) avg, min, max = 10424.8, 9274.5, 89452.3 00:10:08.921 00:10:08.921 Submit histogram 00:10:08.921 ================ 00:10:08.921 Range in us Cumulative Count 00:10:08.921 12.858 - 12.916: 0.0179% ( 2) 00:10:08.921 13.149 - 13.207: 0.0269% ( 1) 00:10:08.921 13.964 - 14.022: 0.0358% ( 1) 00:10:08.921 14.138 - 14.196: 0.0448% ( 1) 00:10:08.921 14.196 - 14.255: 0.0717% ( 3) 00:10:08.921 14.255 - 14.313: 0.2240% ( 17) 00:10:08.921 14.313 - 14.371: 0.4479% ( 25) 00:10:08.921 14.371 - 14.429: 0.6629% ( 24) 00:10:08.921 14.429 - 14.487: 0.9585% ( 33) 00:10:08.921 14.487 - 14.545: 1.1735% ( 24) 00:10:08.921 14.545 - 14.604: 1.8543% ( 76) 00:10:08.921 14.604 - 14.662: 3.8072% ( 218) 00:10:08.921 14.662 - 14.720: 10.7229% ( 772) 00:10:08.921 14.720 - 14.778: 24.6977% ( 1560) 00:10:08.921 14.778 - 14.836: 42.7484% ( 2015) 00:10:08.921 14.836 - 14.895: 58.4162% ( 1749) 00:10:08.921 14.895 - 15.011: 74.2453% ( 1767) 00:10:08.921 15.011 - 15.127: 78.8946% ( 519) 00:10:08.921 15.127 - 15.244: 82.4062% ( 392) 00:10:08.921 15.244 - 15.360: 86.0432% ( 406) 00:10:08.921 15.360 - 15.476: 88.1931% ( 240) 00:10:08.921 15.476 - 15.593: 89.1517% ( 107) 00:10:08.921 15.593 - 15.709: 89.7250% ( 64) 00:10:08.921 15.709 - 15.825: 89.9489% ( 25) 00:10:08.921 15.825 - 15.942: 90.2356% ( 32) 00:10:08.921 15.942 - 16.058: 90.3968% ( 18) 00:10:08.921 16.058 - 16.175: 90.5850% ( 21) 00:10:08.921 16.175 - 16.291: 90.9702% ( 43) 00:10:08.921 16.291 - 16.407: 91.2210% ( 28) 00:10:08.921 16.407 - 16.524: 91.4270% ( 23) 00:10:08.921 16.524 - 16.640: 91.5704% ( 16) 00:10:08.921 16.640 - 16.756: 91.6689% ( 11) 00:10:08.921 16.756 - 16.873: 91.7227% ( 6) 00:10:08.921 16.873 - 16.989: 91.7764% ( 6) 00:10:08.921 16.989 - 17.105: 91.8839% ( 12) 00:10:08.921 17.105 - 17.222: 91.9287% ( 5) 00:10:08.921 17.222 - 17.338: 92.0093% ( 9) 00:10:08.921 17.338 - 17.455: 92.0183% ( 1) 00:10:08.921 17.455 - 17.571: 92.0631% ( 5) 00:10:08.921 17.571 - 17.687: 92.0720% ( 1) 00:10:08.921 17.687 - 17.804: 92.0899% ( 2) 00:10:08.921 17.804 - 17.920: 92.0989% ( 1) 00:10:08.921 17.920 - 18.036: 92.1526% ( 6) 00:10:08.921 18.036 - 18.153: 92.1616% ( 1) 00:10:08.921 18.153 - 18.269: 92.1795% ( 2) 00:10:08.921 18.269 - 18.385: 92.2154% ( 4) 00:10:08.921 18.502 - 18.618: 92.2243% ( 1) 00:10:08.921 18.618 - 18.735: 92.2691% ( 5) 00:10:08.921 18.735 - 18.851: 92.2870% ( 2) 00:10:08.921 18.851 - 18.967: 92.3139% ( 3) 00:10:08.921 18.967 - 19.084: 92.3318% ( 2) 00:10:08.921 19.084 - 19.200: 92.3856% ( 6) 00:10:08.921 19.200 - 19.316: 92.4483% ( 7) 00:10:08.921 19.316 - 19.433: 92.6453% ( 22) 00:10:08.921 19.433 - 19.549: 92.7887% ( 16) 00:10:08.921 19.549 - 19.665: 93.0574% ( 30) 00:10:08.921 19.665 - 19.782: 93.2366% ( 20) 00:10:08.921 19.782 - 19.898: 93.2993% ( 7) 00:10:08.921 19.898 - 20.015: 93.3799% ( 9) 00:10:08.921 20.015 - 20.131: 93.4785% ( 11) 00:10:08.921 20.131 - 20.247: 93.5501% ( 8) 00:10:08.921 20.247 - 20.364: 93.6128% ( 7) 00:10:08.921 20.364 - 20.480: 93.7024% ( 10) 00:10:08.921 20.480 - 20.596: 93.7830% ( 9) 00:10:08.921 20.596 - 20.713: 93.8905% ( 12) 00:10:08.921 20.713 - 20.829: 94.0070% ( 13) 00:10:08.921 20.829 - 20.945: 94.1682% ( 18) 00:10:08.921 20.945 - 21.062: 94.3384% ( 19) 00:10:08.921 21.062 - 21.178: 94.5893% ( 28) 00:10:08.921 21.178 - 21.295: 94.7505% ( 18) 00:10:08.921 21.295 - 21.411: 94.8849% ( 15) 00:10:08.921 21.411 - 21.527: 95.0820% ( 22) 00:10:08.921 21.527 - 21.644: 95.3238% ( 27) 00:10:08.921 21.644 - 21.760: 95.4672% ( 
16) 00:10:08.921 21.760 - 21.876: 95.6284% ( 18) 00:10:08.921 21.876 - 21.993: 95.7897% ( 18) 00:10:08.921 21.993 - 22.109: 95.9957% ( 23) 00:10:08.921 22.109 - 22.225: 96.1569% ( 18) 00:10:08.921 22.225 - 22.342: 96.2465% ( 10) 00:10:08.921 22.342 - 22.458: 96.3988% ( 17) 00:10:08.921 22.458 - 22.575: 96.5242% ( 14) 00:10:08.921 22.575 - 22.691: 96.6407% ( 13) 00:10:08.921 22.691 - 22.807: 96.7482% ( 12) 00:10:08.921 22.807 - 22.924: 96.8109% ( 7) 00:10:08.921 22.924 - 23.040: 96.9453% ( 15) 00:10:08.921 23.040 - 23.156: 97.0617% ( 13) 00:10:08.921 23.156 - 23.273: 97.1423% ( 9) 00:10:08.921 23.273 - 23.389: 97.2319% ( 10) 00:10:08.921 23.389 - 23.505: 97.2857% ( 6) 00:10:08.921 23.505 - 23.622: 97.3573% ( 8) 00:10:08.921 23.622 - 23.738: 97.4380% ( 9) 00:10:08.921 23.738 - 23.855: 97.5544% ( 13) 00:10:08.921 23.855 - 23.971: 97.6888% ( 15) 00:10:08.921 23.971 - 24.087: 97.7425% ( 6) 00:10:08.921 24.087 - 24.204: 97.7963% ( 6) 00:10:08.921 24.204 - 24.320: 97.8859% ( 10) 00:10:08.921 24.320 - 24.436: 97.9755% ( 10) 00:10:08.921 24.436 - 24.553: 98.1009% ( 14) 00:10:08.921 24.553 - 24.669: 98.1815% ( 9) 00:10:08.921 24.669 - 24.785: 98.2442% ( 7) 00:10:08.921 24.785 - 24.902: 98.3338% ( 10) 00:10:08.921 24.902 - 25.018: 98.3786% ( 5) 00:10:08.921 25.018 - 25.135: 98.4234% ( 5) 00:10:08.921 25.135 - 25.251: 98.4592% ( 4) 00:10:08.921 25.251 - 25.367: 98.4861% ( 3) 00:10:08.921 25.367 - 25.484: 98.5040% ( 2) 00:10:08.921 25.484 - 25.600: 98.5219% ( 2) 00:10:08.921 25.600 - 25.716: 98.5577% ( 4) 00:10:08.921 25.716 - 25.833: 98.5936% ( 4) 00:10:08.921 25.833 - 25.949: 98.6115% ( 2) 00:10:08.921 25.949 - 26.065: 98.6384% ( 3) 00:10:08.921 26.065 - 26.182: 98.6563% ( 2) 00:10:08.921 26.182 - 26.298: 98.6652% ( 1) 00:10:08.921 26.298 - 26.415: 98.6921% ( 3) 00:10:08.921 26.415 - 26.531: 98.7369% ( 5) 00:10:08.921 26.531 - 26.647: 98.7548% ( 2) 00:10:08.921 26.764 - 26.880: 98.7817% ( 3) 00:10:08.921 26.880 - 26.996: 98.7996% ( 2) 00:10:08.921 26.996 - 27.113: 98.8086% ( 1) 00:10:08.921 27.113 - 27.229: 98.8265% ( 2) 00:10:08.921 27.229 - 27.345: 98.8534% ( 3) 00:10:08.921 27.345 - 27.462: 98.8623% ( 1) 00:10:08.921 27.462 - 27.578: 98.8713% ( 1) 00:10:08.921 27.811 - 27.927: 98.8892% ( 2) 00:10:08.921 28.276 - 28.393: 98.8981% ( 1) 00:10:08.921 28.509 - 28.625: 98.9071% ( 1) 00:10:08.921 28.858 - 28.975: 98.9161% ( 1) 00:10:08.921 28.975 - 29.091: 98.9250% ( 1) 00:10:08.921 29.091 - 29.207: 98.9429% ( 2) 00:10:08.921 29.207 - 29.324: 99.0146% ( 8) 00:10:08.921 29.324 - 29.440: 99.1311% ( 13) 00:10:08.921 29.440 - 29.556: 99.2206% ( 10) 00:10:08.921 29.556 - 29.673: 99.4088% ( 21) 00:10:08.921 29.673 - 29.789: 99.5431% ( 15) 00:10:08.921 29.789 - 30.022: 99.6148% ( 8) 00:10:08.921 30.022 - 30.255: 99.6685% ( 6) 00:10:08.921 30.255 - 30.487: 99.6775% ( 1) 00:10:08.921 30.720 - 30.953: 99.7044% ( 3) 00:10:08.921 30.953 - 31.185: 99.7313% ( 3) 00:10:08.921 31.185 - 31.418: 99.7402% ( 1) 00:10:08.921 31.884 - 32.116: 99.7492% ( 1) 00:10:08.921 32.116 - 32.349: 99.7671% ( 2) 00:10:08.921 32.349 - 32.582: 99.7940% ( 3) 00:10:08.921 32.582 - 32.815: 99.8029% ( 1) 00:10:08.921 32.815 - 33.047: 99.8119% ( 1) 00:10:08.921 33.280 - 33.513: 99.8208% ( 1) 00:10:08.921 33.745 - 33.978: 99.8298% ( 1) 00:10:08.921 33.978 - 34.211: 99.8388% ( 1) 00:10:08.921 34.444 - 34.676: 99.8477% ( 1) 00:10:08.921 35.142 - 35.375: 99.8567% ( 1) 00:10:08.921 35.375 - 35.607: 99.8656% ( 1) 00:10:08.921 36.073 - 36.305: 99.8746% ( 1) 00:10:08.921 36.538 - 36.771: 99.8835% ( 1) 00:10:08.921 37.702 - 37.935: 99.8925% ( 1) 
00:10:08.921 38.167 - 38.400: 99.9104% ( 2) 00:10:08.921 38.865 - 39.098: 99.9194% ( 1) 00:10:08.921 39.796 - 40.029: 99.9283% ( 1) 00:10:08.921 40.029 - 40.262: 99.9373% ( 1) 00:10:08.921 42.589 - 42.822: 99.9552% ( 2) 00:10:08.921 43.055 - 43.287: 99.9642% ( 1) 00:10:08.921 56.785 - 57.018: 99.9731% ( 1) 00:10:08.921 80.989 - 81.455: 99.9821% ( 1) 00:10:08.921 81.920 - 82.385: 99.9910% ( 1) 00:10:08.921 83.316 - 83.782: 100.0000% ( 1) 00:10:08.921 00:10:08.921 Complete histogram 00:10:08.921 ================== 00:10:08.921 Range in us Cumulative Count 00:10:08.921 9.251 - 9.309: 0.0090% ( 1) 00:10:08.921 9.367 - 9.425: 0.0448% ( 4) 00:10:08.921 9.425 - 9.484: 0.2508% ( 23) 00:10:08.922 9.484 - 9.542: 0.4837% ( 26) 00:10:08.922 9.542 - 9.600: 0.7346% ( 28) 00:10:08.922 9.600 - 9.658: 0.9048% ( 19) 00:10:08.922 9.658 - 9.716: 3.2160% ( 258) 00:10:08.922 9.716 - 9.775: 13.8672% ( 1189) 00:10:08.922 9.775 - 9.833: 33.1004% ( 2147) 00:10:08.922 9.833 - 9.891: 52.3874% ( 2153) 00:10:08.922 9.891 - 9.949: 66.1292% ( 1534) 00:10:08.922 9.949 - 10.007: 74.8544% ( 974) 00:10:08.922 10.007 - 10.065: 79.7277% ( 544) 00:10:08.922 10.065 - 10.124: 81.9672% ( 250) 00:10:08.922 10.124 - 10.182: 83.4543% ( 166) 00:10:08.922 10.182 - 10.240: 84.2695% ( 91) 00:10:08.922 10.240 - 10.298: 84.7353% ( 52) 00:10:08.922 10.298 - 10.356: 85.1474% ( 46) 00:10:08.922 10.356 - 10.415: 85.4788% ( 37) 00:10:08.922 10.415 - 10.473: 85.7834% ( 34) 00:10:08.922 10.473 - 10.531: 86.0880% ( 34) 00:10:08.922 10.531 - 10.589: 86.5000% ( 46) 00:10:08.922 10.589 - 10.647: 87.2077% ( 79) 00:10:08.922 10.647 - 10.705: 88.1215% ( 102) 00:10:08.922 10.705 - 10.764: 89.2502% ( 126) 00:10:08.922 10.764 - 10.822: 90.5671% ( 147) 00:10:08.922 10.822 - 10.880: 91.5345% ( 108) 00:10:08.922 10.880 - 10.938: 92.2243% ( 77) 00:10:08.922 10.938 - 10.996: 92.4841% ( 29) 00:10:08.922 10.996 - 11.055: 92.8156% ( 37) 00:10:08.922 11.055 - 11.113: 93.0485% ( 26) 00:10:08.922 11.113 - 11.171: 93.2097% ( 18) 00:10:08.922 11.171 - 11.229: 93.3083% ( 11) 00:10:08.922 11.229 - 11.287: 93.3799% ( 8) 00:10:08.922 11.287 - 11.345: 93.4695% ( 10) 00:10:08.922 11.345 - 11.404: 93.5591% ( 10) 00:10:08.922 11.404 - 11.462: 93.5949% ( 4) 00:10:08.922 11.462 - 11.520: 93.6218% ( 3) 00:10:08.922 11.520 - 11.578: 93.6666% ( 5) 00:10:08.922 11.578 - 11.636: 93.7024% ( 4) 00:10:08.922 11.636 - 11.695: 93.7203% ( 2) 00:10:08.922 11.695 - 11.753: 93.7920% ( 8) 00:10:08.922 11.753 - 11.811: 93.8189% ( 3) 00:10:08.922 11.811 - 11.869: 93.8368% ( 2) 00:10:08.922 11.869 - 11.927: 93.8816% ( 5) 00:10:08.922 11.927 - 11.985: 93.9353% ( 6) 00:10:08.922 11.985 - 12.044: 93.9712% ( 4) 00:10:08.922 12.044 - 12.102: 94.0159% ( 5) 00:10:08.922 12.102 - 12.160: 94.0339% ( 2) 00:10:08.922 12.160 - 12.218: 94.0787% ( 5) 00:10:08.922 12.218 - 12.276: 94.1055% ( 3) 00:10:08.922 12.276 - 12.335: 94.1414% ( 4) 00:10:08.922 12.335 - 12.393: 94.1593% ( 2) 00:10:08.922 12.393 - 12.451: 94.1951% ( 4) 00:10:08.922 12.451 - 12.509: 94.2130% ( 2) 00:10:08.922 12.509 - 12.567: 94.2309% ( 2) 00:10:08.922 12.567 - 12.625: 94.2668% ( 4) 00:10:08.922 12.625 - 12.684: 94.2847% ( 2) 00:10:08.922 12.684 - 12.742: 94.3205% ( 4) 00:10:08.922 12.742 - 12.800: 94.3653% ( 5) 00:10:08.922 12.800 - 12.858: 94.3832% ( 2) 00:10:08.922 12.858 - 12.916: 94.4101% ( 3) 00:10:08.922 12.916 - 12.975: 94.4549% ( 5) 00:10:08.922 12.975 - 13.033: 94.4639% ( 1) 00:10:08.922 13.033 - 13.091: 94.4728% ( 1) 00:10:08.922 13.091 - 13.149: 94.5176% ( 5) 00:10:08.922 13.149 - 13.207: 94.5355% ( 2) 00:10:08.922 13.207 - 
13.265: 94.5534% ( 2) 00:10:08.922 13.265 - 13.324: 94.6161% ( 7) 00:10:08.922 13.382 - 13.440: 94.6430% ( 3) 00:10:08.922 13.440 - 13.498: 94.6699% ( 3) 00:10:08.922 13.556 - 13.615: 94.6788% ( 1) 00:10:08.922 13.615 - 13.673: 94.6968% ( 2) 00:10:08.922 13.673 - 13.731: 94.7505% ( 6) 00:10:08.922 13.731 - 13.789: 94.7774% ( 3) 00:10:08.922 13.789 - 13.847: 94.8311% ( 6) 00:10:08.922 13.847 - 13.905: 94.8580% ( 3) 00:10:08.922 13.905 - 13.964: 94.8759% ( 2) 00:10:08.922 13.964 - 14.022: 94.9028% ( 3) 00:10:08.922 14.022 - 14.080: 94.9207% ( 2) 00:10:08.922 14.080 - 14.138: 94.9655% ( 5) 00:10:08.922 14.138 - 14.196: 94.9745% ( 1) 00:10:08.922 14.196 - 14.255: 94.9924% ( 2) 00:10:08.922 14.255 - 14.313: 95.0193% ( 3) 00:10:08.922 14.313 - 14.371: 95.0730% ( 6) 00:10:08.922 14.371 - 14.429: 95.1268% ( 6) 00:10:08.922 14.429 - 14.487: 95.1626% ( 4) 00:10:08.922 14.487 - 14.545: 95.1895% ( 3) 00:10:08.922 14.545 - 14.604: 95.1984% ( 1) 00:10:08.922 14.604 - 14.662: 95.2522% ( 6) 00:10:08.922 14.662 - 14.720: 95.2790% ( 3) 00:10:08.922 14.720 - 14.778: 95.3059% ( 3) 00:10:08.922 14.778 - 14.836: 95.3686% ( 7) 00:10:08.922 14.895 - 15.011: 95.4313% ( 7) 00:10:08.922 15.011 - 15.127: 95.5299% ( 11) 00:10:08.922 15.127 - 15.244: 95.5657% ( 4) 00:10:08.922 15.244 - 15.360: 95.6374% ( 8) 00:10:08.922 15.360 - 15.476: 95.7090% ( 8) 00:10:08.922 15.476 - 15.593: 95.7897% ( 9) 00:10:08.922 15.593 - 15.709: 95.8524% ( 7) 00:10:08.922 15.709 - 15.825: 95.9778% ( 14) 00:10:08.922 15.825 - 15.942: 96.1569% ( 20) 00:10:08.922 15.942 - 16.058: 96.3092% ( 17) 00:10:08.922 16.058 - 16.175: 96.4436% ( 15) 00:10:08.922 16.175 - 16.291: 96.5869% ( 16) 00:10:08.922 16.291 - 16.407: 96.7124% ( 14) 00:10:08.922 16.407 - 16.524: 96.8736% ( 18) 00:10:08.922 16.524 - 16.640: 96.9721% ( 11) 00:10:08.922 16.640 - 16.756: 97.0707% ( 11) 00:10:08.922 16.756 - 16.873: 97.2409% ( 19) 00:10:08.922 16.873 - 16.989: 97.3394% ( 11) 00:10:08.922 16.989 - 17.105: 97.4648% ( 14) 00:10:08.922 17.105 - 17.222: 97.5634% ( 11) 00:10:08.922 17.222 - 17.338: 97.6709% ( 12) 00:10:08.922 17.338 - 17.455: 97.7784% ( 12) 00:10:08.922 17.455 - 17.571: 97.9396% ( 18) 00:10:08.922 17.571 - 17.687: 98.0382% ( 11) 00:10:08.922 17.687 - 17.804: 98.1277% ( 10) 00:10:08.922 17.804 - 17.920: 98.2890% ( 18) 00:10:08.922 17.920 - 18.036: 98.4682% ( 20) 00:10:08.922 18.036 - 18.153: 98.5129% ( 5) 00:10:08.922 18.153 - 18.269: 98.5936% ( 9) 00:10:08.922 18.269 - 18.385: 98.6563% ( 7) 00:10:08.922 18.385 - 18.502: 98.7459% ( 10) 00:10:08.922 18.502 - 18.618: 98.8175% ( 8) 00:10:08.922 18.618 - 18.735: 98.8444% ( 3) 00:10:08.922 18.735 - 18.851: 98.8802% ( 4) 00:10:08.922 18.851 - 18.967: 98.8892% ( 1) 00:10:08.922 18.967 - 19.084: 98.9161% ( 3) 00:10:08.922 19.084 - 19.200: 98.9429% ( 3) 00:10:08.922 19.200 - 19.316: 98.9877% ( 5) 00:10:08.922 19.316 - 19.433: 99.0236% ( 4) 00:10:08.922 19.433 - 19.549: 99.0325% ( 1) 00:10:08.922 19.549 - 19.665: 99.0504% ( 2) 00:10:08.922 19.665 - 19.782: 99.0863% ( 4) 00:10:08.922 19.782 - 19.898: 99.1221% ( 4) 00:10:08.922 19.898 - 20.015: 99.1579% ( 4) 00:10:08.922 20.015 - 20.131: 99.1848% ( 3) 00:10:08.922 20.247 - 20.364: 99.1938% ( 1) 00:10:08.922 20.364 - 20.480: 99.2027% ( 1) 00:10:08.922 20.480 - 20.596: 99.2206% ( 2) 00:10:08.922 20.596 - 20.713: 99.2296% ( 1) 00:10:08.922 20.713 - 20.829: 99.2386% ( 1) 00:10:08.922 20.829 - 20.945: 99.2654% ( 3) 00:10:08.922 20.945 - 21.062: 99.3013% ( 4) 00:10:08.922 21.062 - 21.178: 99.3192% ( 2) 00:10:08.922 21.178 - 21.295: 99.3281% ( 1) 00:10:08.922 21.295 - 21.411: 
99.3371% ( 1) 00:10:08.922 21.411 - 21.527: 99.3550% ( 2) 00:10:08.922 21.876 - 21.993: 99.3640% ( 1) 00:10:08.922 22.109 - 22.225: 99.3729% ( 1) 00:10:08.922 22.225 - 22.342: 99.3819% ( 1) 00:10:08.922 22.342 - 22.458: 99.3908% ( 1) 00:10:08.922 22.458 - 22.575: 99.3998% ( 1) 00:10:08.922 22.575 - 22.691: 99.4177% ( 2) 00:10:08.922 23.273 - 23.389: 99.4267% ( 1) 00:10:08.922 23.855 - 23.971: 99.4356% ( 1) 00:10:08.922 24.087 - 24.204: 99.4625% ( 3) 00:10:08.922 24.204 - 24.320: 99.4804% ( 2) 00:10:08.922 24.320 - 24.436: 99.5431% ( 7) 00:10:08.922 24.436 - 24.553: 99.6775% ( 15) 00:10:08.922 24.553 - 24.669: 99.7223% ( 5) 00:10:08.922 24.669 - 24.785: 99.7402% ( 2) 00:10:08.922 24.785 - 24.902: 99.7671% ( 3) 00:10:08.922 24.902 - 25.018: 99.7760% ( 1) 00:10:08.922 25.018 - 25.135: 99.7850% ( 1) 00:10:08.922 25.251 - 25.367: 99.7940% ( 1) 00:10:08.922 25.367 - 25.484: 99.8029% ( 1) 00:10:08.922 25.484 - 25.600: 99.8119% ( 1) 00:10:08.922 25.600 - 25.716: 99.8298% ( 2) 00:10:08.922 25.716 - 25.833: 99.8388% ( 1) 00:10:08.922 25.949 - 26.065: 99.8477% ( 1) 00:10:08.922 26.298 - 26.415: 99.8567% ( 1) 00:10:08.922 28.393 - 28.509: 99.8656% ( 1) 00:10:08.922 28.625 - 28.742: 99.8746% ( 1) 00:10:08.922 29.091 - 29.207: 99.8925% ( 2) 00:10:08.922 29.673 - 29.789: 99.9015% ( 1) 00:10:08.922 30.953 - 31.185: 99.9104% ( 1) 00:10:08.922 31.884 - 32.116: 99.9194% ( 1) 00:10:08.922 32.349 - 32.582: 99.9283% ( 1) 00:10:08.922 33.978 - 34.211: 99.9373% ( 1) 00:10:08.922 35.142 - 35.375: 99.9463% ( 1) 00:10:08.922 42.124 - 42.356: 99.9552% ( 1) 00:10:08.923 49.804 - 50.036: 99.9642% ( 1) 00:10:08.923 54.458 - 54.691: 99.9731% ( 1) 00:10:08.923 87.505 - 87.971: 99.9821% ( 1) 00:10:08.923 87.971 - 88.436: 99.9910% ( 1) 00:10:08.923 89.367 - 89.833: 100.0000% ( 1) 00:10:08.923 00:10:08.923 00:10:08.923 real 0m1.301s 00:10:08.923 user 0m1.116s 00:10:08.923 sys 0m0.129s 00:10:08.923 09:48:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.923 ************************************ 00:10:08.923 END TEST nvme_overhead 00:10:08.923 ************************************ 00:10:08.923 09:48:02 -- common/autotest_common.sh@10 -- # set +x 00:10:08.923 09:48:02 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:08.923 09:48:02 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:08.923 09:48:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:08.923 09:48:02 -- common/autotest_common.sh@10 -- # set +x 00:10:08.923 ************************************ 00:10:08.923 START TEST nvme_arbitration 00:10:08.923 ************************************ 00:10:08.923 09:48:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:13.110 Initializing NVMe Controllers 00:10:13.110 Attached to 0000:00:06.0 00:10:13.110 Attached to 0000:00:07.0 00:10:13.110 Attached to 0000:00:09.0 00:10:13.110 Attached to 0000:00:08.0 00:10:13.110 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:13.110 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:13.110 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:13.110 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:13.110 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:13.110 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:13.110 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:13.110 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 
3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:13.110 Initialization complete. Launching workers. 00:10:13.110 Starting thread on core 1 with urgent priority queue 00:10:13.110 Starting thread on core 2 with urgent priority queue 00:10:13.110 Starting thread on core 3 with urgent priority queue 00:10:13.110 Starting thread on core 0 with urgent priority queue 00:10:13.110 QEMU NVMe Ctrl (12340 ) core 0: 704.00 IO/s 142.05 secs/100000 ios 00:10:13.110 QEMU NVMe Ctrl (12342 ) core 0: 704.00 IO/s 142.05 secs/100000 ios 00:10:13.110 QEMU NVMe Ctrl (12341 ) core 1: 661.33 IO/s 151.21 secs/100000 ios 00:10:13.110 QEMU NVMe Ctrl (12342 ) core 1: 661.33 IO/s 151.21 secs/100000 ios 00:10:13.110 QEMU NVMe Ctrl (12343 ) core 2: 661.33 IO/s 151.21 secs/100000 ios 00:10:13.110 QEMU NVMe Ctrl (12342 ) core 3: 661.33 IO/s 151.21 secs/100000 ios 00:10:13.110 ======================================================== 00:10:13.110 00:10:13.110 00:10:13.110 real 0m3.505s 00:10:13.110 user 0m9.536s 00:10:13.110 sys 0m0.160s 00:10:13.110 09:48:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.110 ************************************ 00:10:13.110 END TEST nvme_arbitration 00:10:13.110 ************************************ 00:10:13.110 09:48:05 -- common/autotest_common.sh@10 -- # set +x 00:10:13.110 09:48:06 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:10:13.110 09:48:06 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:13.110 09:48:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:13.110 09:48:06 -- common/autotest_common.sh@10 -- # set +x 00:10:13.110 ************************************ 00:10:13.110 START TEST nvme_single_aen 00:10:13.110 ************************************ 00:10:13.110 09:48:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:10:13.110 [2024-06-10 09:48:06.074081] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:13.110 [2024-06-10 09:48:06.074177] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.110 [2024-06-10 09:48:06.238559] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:13.110 [2024-06-10 09:48:06.240141] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:13.110 [2024-06-10 09:48:06.241534] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:13.110 [2024-06-10 09:48:06.242880] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:13.110 Asynchronous Event Request test 00:10:13.110 Attached to 0000:00:06.0 00:10:13.110 Attached to 0000:00:07.0 00:10:13.110 Attached to 0000:00:09.0 00:10:13.110 Attached to 0000:00:08.0 00:10:13.110 Reset controller to setup AER completions for this process 00:10:13.110 Registering asynchronous event callbacks... 
00:10:13.110 Getting orig temperature thresholds of all controllers 00:10:13.110 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:13.110 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:13.110 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:13.110 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:13.110 Setting all controllers temperature threshold low to trigger AER 00:10:13.110 Waiting for all controllers temperature threshold to be set lower 00:10:13.110 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:13.110 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:13.110 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:13.110 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:13.110 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:13.110 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:13.110 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:13.110 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:13.110 Waiting for all controllers to trigger AER and reset threshold 00:10:13.110 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:13.110 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:13.110 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:13.110 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:13.110 Cleaning up... 00:10:13.110 00:10:13.110 real 0m0.247s 00:10:13.110 user 0m0.105s 00:10:13.110 sys 0m0.097s 00:10:13.110 09:48:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.110 ************************************ 00:10:13.110 END TEST nvme_single_aen 00:10:13.110 09:48:06 -- common/autotest_common.sh@10 -- # set +x 00:10:13.110 ************************************ 00:10:13.110 09:48:06 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:13.110 09:48:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:13.110 09:48:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:13.110 09:48:06 -- common/autotest_common.sh@10 -- # set +x 00:10:13.110 ************************************ 00:10:13.110 START TEST nvme_doorbell_aers 00:10:13.110 ************************************ 00:10:13.110 09:48:06 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:10:13.110 09:48:06 -- nvme/nvme.sh@70 -- # bdfs=() 00:10:13.110 09:48:06 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:13.110 09:48:06 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:13.110 09:48:06 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:13.110 09:48:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:13.110 09:48:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:13.110 09:48:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:13.110 09:48:06 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:13.110 09:48:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:13.110 09:48:06 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:13.110 09:48:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:13.110 09:48:06 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:13.110 09:48:06 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:13.110 [2024-06-10 09:48:06.647311] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:23.097 Executing: test_write_invalid_db 00:10:23.097 Waiting for AER completion... 00:10:23.097 Failure: test_write_invalid_db 00:10:23.097 00:10:23.097 Executing: test_invalid_db_write_overflow_sq 00:10:23.097 Waiting for AER completion... 00:10:23.097 Failure: test_invalid_db_write_overflow_sq 00:10:23.097 00:10:23.097 Executing: test_invalid_db_write_overflow_cq 00:10:23.097 Waiting for AER completion... 00:10:23.097 Failure: test_invalid_db_write_overflow_cq 00:10:23.097 00:10:23.097 09:48:16 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:23.097 09:48:16 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:23.097 [2024-06-10 09:48:16.724997] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:33.100 Executing: test_write_invalid_db 00:10:33.100 Waiting for AER completion... 00:10:33.100 Failure: test_write_invalid_db 00:10:33.100 00:10:33.100 Executing: test_invalid_db_write_overflow_sq 00:10:33.100 Waiting for AER completion... 00:10:33.100 Failure: test_invalid_db_write_overflow_sq 00:10:33.100 00:10:33.100 Executing: test_invalid_db_write_overflow_cq 00:10:33.100 Waiting for AER completion... 00:10:33.100 Failure: test_invalid_db_write_overflow_cq 00:10:33.100 00:10:33.100 09:48:26 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:33.100 09:48:26 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:33.100 [2024-06-10 09:48:26.756257] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:43.073 Executing: test_write_invalid_db 00:10:43.073 Waiting for AER completion... 00:10:43.073 Failure: test_write_invalid_db 00:10:43.073 00:10:43.073 Executing: test_invalid_db_write_overflow_sq 00:10:43.073 Waiting for AER completion... 00:10:43.073 Failure: test_invalid_db_write_overflow_sq 00:10:43.073 00:10:43.073 Executing: test_invalid_db_write_overflow_cq 00:10:43.073 Waiting for AER completion... 00:10:43.073 Failure: test_invalid_db_write_overflow_cq 00:10:43.073 00:10:43.073 09:48:36 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:43.073 09:48:36 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:43.073 [2024-06-10 09:48:36.804656] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:53.042 Executing: test_write_invalid_db 00:10:53.042 Waiting for AER completion... 00:10:53.042 Failure: test_write_invalid_db 00:10:53.042 00:10:53.042 Executing: test_invalid_db_write_overflow_sq 00:10:53.042 Waiting for AER completion... 00:10:53.042 Failure: test_invalid_db_write_overflow_sq 00:10:53.042 00:10:53.042 Executing: test_invalid_db_write_overflow_cq 00:10:53.042 Waiting for AER completion... 
00:10:53.042 Failure: test_invalid_db_write_overflow_cq 00:10:53.042 00:10:53.042 00:10:53.042 real 0m40.238s 00:10:53.042 user 0m33.648s 00:10:53.042 sys 0m6.242s 00:10:53.042 09:48:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.042 09:48:46 -- common/autotest_common.sh@10 -- # set +x 00:10:53.042 ************************************ 00:10:53.043 END TEST nvme_doorbell_aers 00:10:53.043 ************************************ 00:10:53.043 09:48:46 -- nvme/nvme.sh@97 -- # uname 00:10:53.043 09:48:46 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:53.043 09:48:46 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:53.043 09:48:46 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:53.043 09:48:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:53.043 09:48:46 -- common/autotest_common.sh@10 -- # set +x 00:10:53.043 ************************************ 00:10:53.043 START TEST nvme_multi_aen 00:10:53.043 ************************************ 00:10:53.043 09:48:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:53.043 [2024-06-10 09:48:46.678903] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:53.043 [2024-06-10 09:48:46.679001] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.302 [2024-06-10 09:48:46.850858] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:53.302 [2024-06-10 09:48:46.850946] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:53.302 [2024-06-10 09:48:46.851022] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:53.302 [2024-06-10 09:48:46.851057] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:53.302 [2024-06-10 09:48:46.852937] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:53.302 [2024-06-10 09:48:46.852982] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:53.302 [2024-06-10 09:48:46.853019] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:53.302 [2024-06-10 09:48:46.853051] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:53.302 [2024-06-10 09:48:46.854398] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:53.302 [2024-06-10 09:48:46.854451] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:53.302 [2024-06-10 09:48:46.854508] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 
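Editor's note: despite the per-case "Failure:" lines, the harness records the doorbell test as passed; END TEST nvme_doorbell_aers above closes with normal timing and the suite moves on. The four doorbell_aers runs all follow the loop traced at 09:48:06: enumerate controller BDFs from gen_nvme.sh, then drive each one under a 10-second timeout. A standalone sketch of that pattern, using the same paths the trace shows:

    rootdir=/home/vagrant/spdk_repo/spdk
    # one BDF per controller, as emitted by gen_nvme.sh
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
      timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done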
00:10:53.302 [2024-06-10 09:48:46.854555] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:53.302 [2024-06-10 09:48:46.855826] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:53.302 [2024-06-10 09:48:46.855869] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:53.302 [2024-06-10 09:48:46.855929] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:53.302 [2024-06-10 09:48:46.855963] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65008) is not found. Dropping the request. 00:10:53.302 [2024-06-10 09:48:46.866413] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:53.302 Child process pid: 65527 00:10:53.302 [2024-06-10 09:48:46.866611] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.561 [Child] Asynchronous Event Request test 00:10:53.561 [Child] Attached to 0000:00:06.0 00:10:53.561 [Child] Attached to 0000:00:07.0 00:10:53.561 [Child] Attached to 0000:00:09.0 00:10:53.561 [Child] Attached to 0000:00:08.0 00:10:53.561 [Child] Registering asynchronous event callbacks... 00:10:53.561 [Child] Getting orig temperature thresholds of all controllers 00:10:53.561 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:53.561 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:53.561 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:53.561 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:53.561 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:53.561 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:53.561 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:53.561 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:53.561 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:53.561 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.561 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.561 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.561 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.561 [Child] Cleaning up... 00:10:53.561 Asynchronous Event Request test 00:10:53.561 Attached to 0000:00:06.0 00:10:53.561 Attached to 0000:00:07.0 00:10:53.561 Attached to 0000:00:09.0 00:10:53.561 Attached to 0000:00:08.0 00:10:53.561 Reset controller to setup AER completions for this process 00:10:53.561 Registering asynchronous event callbacks... 
00:10:53.561 Getting orig temperature thresholds of all controllers 00:10:53.561 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:53.561 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:53.561 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:53.561 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:53.561 Setting all controllers temperature threshold low to trigger AER 00:10:53.561 Waiting for all controllers temperature threshold to be set lower 00:10:53.561 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:53.561 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:53.561 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:53.561 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:53.561 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:53.561 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:53.561 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:53.561 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:53.561 Waiting for all controllers to trigger AER and reset threshold 00:10:53.561 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.561 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.561 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.561 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.561 Cleaning up... 00:10:53.561 00:10:53.561 real 0m0.528s 00:10:53.562 user 0m0.192s 00:10:53.562 sys 0m0.233s 00:10:53.562 09:48:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.562 09:48:47 -- common/autotest_common.sh@10 -- # set +x 00:10:53.562 ************************************ 00:10:53.562 END TEST nvme_multi_aen 00:10:53.562 ************************************ 00:10:53.562 09:48:47 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:53.562 09:48:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:53.562 09:48:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:53.562 09:48:47 -- common/autotest_common.sh@10 -- # set +x 00:10:53.562 ************************************ 00:10:53.562 START TEST nvme_startup 00:10:53.562 ************************************ 00:10:53.562 09:48:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:53.821 Initializing NVMe Controllers 00:10:53.821 Attached to 0000:00:06.0 00:10:53.821 Attached to 0000:00:07.0 00:10:53.821 Attached to 0000:00:09.0 00:10:53.821 Attached to 0000:00:08.0 00:10:53.821 Initialization complete. 00:10:53.821 Time used:163816.656 (us). 
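Editor's note on the two AER tests above: both use the same trigger. Read the Temperature Threshold feature (FID 0x04, reported as 343 K), set it below the current composite temperature (323 K) so the controller fires an Asynchronous Event, read SMART log page 0x02 from aer_cb, then restore the threshold. A hedged sketch of the same sequence driven by hand with nvme-cli, outside the suite (device name assumed; short flags as in recent nvme-cli releases):

    nvme get-feature /dev/nvme0 -f 0x4 -H        # current threshold, e.g. 343 K
    nvme set-feature /dev/nvme0 -f 0x4 -v 300    # below 323 K, so a temperature AEN fires
    nvme get-log    /dev/nvme0 -i 0x2 -l 512     # SMART/health page the aer_cb reads
    nvme set-feature /dev/nvme0 -f 0x4 -v 343    # restore the original threshold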
00:10:53.821 00:10:53.821 real 0m0.238s 00:10:53.821 user 0m0.074s 00:10:53.821 sys 0m0.123s 00:10:53.821 09:48:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.821 09:48:47 -- common/autotest_common.sh@10 -- # set +x 00:10:53.821 ************************************ 00:10:53.821 END TEST nvme_startup 00:10:53.821 ************************************ 00:10:53.821 09:48:47 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:53.821 09:48:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:53.821 09:48:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:53.821 09:48:47 -- common/autotest_common.sh@10 -- # set +x 00:10:53.821 ************************************ 00:10:53.821 START TEST nvme_multi_secondary 00:10:53.822 ************************************ 00:10:53.822 09:48:47 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:10:53.822 09:48:47 -- nvme/nvme.sh@52 -- # pid0=65583 00:10:53.822 09:48:47 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:53.822 09:48:47 -- nvme/nvme.sh@54 -- # pid1=65584 00:10:53.822 09:48:47 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:53.822 09:48:47 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:57.103 Initializing NVMe Controllers 00:10:57.103 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:57.103 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:57.103 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:57.103 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:57.103 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:57.103 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:57.103 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:57.103 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:57.103 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:57.103 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:57.103 Initialization complete. Launching workers. 
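Editor's note: the perf tables that follow are easy to spot-check, since MiB/s is IOPS x 4096 B / 2^20 and, at queue depth 16, Little's law ties IOPS to the average-latency column. For the first core-1 row below:

    echo '5541.32 * 4096 / 1048576' | bc -l   # 21.65 MiB/s, matching the table
    echo '16 / 2885.71 * 1000000'  | bc -l    # ~5544 IOPS implied by a 2885.71 us average at qd 16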
00:10:57.103 ======================================================== 00:10:57.103 Latency(us) 00:10:57.103 Device Information : IOPS MiB/s Average min max 00:10:57.103 PCIE (0000:00:06.0) NSID 1 from core 1: 5541.32 21.65 2885.71 1074.46 6367.94 00:10:57.103 PCIE (0000:00:07.0) NSID 1 from core 1: 5541.32 21.65 2886.91 1121.04 7059.54 00:10:57.103 PCIE (0000:00:09.0) NSID 1 from core 1: 5541.32 21.65 2886.99 1101.52 6957.86 00:10:57.103 PCIE (0000:00:08.0) NSID 1 from core 1: 5541.32 21.65 2886.95 1112.39 6438.01 00:10:57.103 PCIE (0000:00:08.0) NSID 2 from core 1: 5541.32 21.65 2886.90 1105.20 6043.51 00:10:57.103 PCIE (0000:00:08.0) NSID 3 from core 1: 5541.32 21.65 2886.85 1111.25 5934.35 00:10:57.103 ======================================================== 00:10:57.103 Total : 33247.90 129.87 2886.72 1074.46 7059.54 00:10:57.103 00:10:57.361 Initializing NVMe Controllers 00:10:57.361 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:57.361 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:57.361 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:57.361 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:57.361 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:57.361 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:57.361 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:57.361 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:57.361 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:57.361 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:57.361 Initialization complete. Launching workers. 00:10:57.361 ======================================================== 00:10:57.361 Latency(us) 00:10:57.361 Device Information : IOPS MiB/s Average min max 00:10:57.361 PCIE (0000:00:06.0) NSID 1 from core 2: 2442.57 9.54 6547.18 1874.34 14023.74 00:10:57.361 PCIE (0000:00:07.0) NSID 1 from core 2: 2442.57 9.54 6549.59 1774.21 15446.09 00:10:57.361 PCIE (0000:00:09.0) NSID 1 from core 2: 2442.57 9.54 6544.04 1680.80 15569.60 00:10:57.361 PCIE (0000:00:08.0) NSID 1 from core 2: 2442.57 9.54 6541.57 1679.80 13929.13 00:10:57.361 PCIE (0000:00:08.0) NSID 2 from core 2: 2442.57 9.54 6541.43 1539.98 14330.32 00:10:57.361 PCIE (0000:00:08.0) NSID 3 from core 2: 2442.57 9.54 6541.37 1371.27 17128.72 00:10:57.361 ======================================================== 00:10:57.361 Total : 14655.40 57.25 6544.20 1371.27 17128.72 00:10:57.361 00:10:57.361 09:48:51 -- nvme/nvme.sh@56 -- # wait 65583 00:10:59.889 Initializing NVMe Controllers 00:10:59.889 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:59.889 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:59.889 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:59.889 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:59.889 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:59.889 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:59.889 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:59.889 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:59.889 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:59.889 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:59.889 Initialization complete. Launching workers. 
00:10:59.889 ======================================================== 00:10:59.889 Latency(us) 00:10:59.889 Device Information : IOPS MiB/s Average min max 00:10:59.889 PCIE (0000:00:06.0) NSID 1 from core 0: 8228.86 32.14 1942.85 991.20 5964.31 00:10:59.889 PCIE (0000:00:07.0) NSID 1 from core 0: 8228.86 32.14 1943.89 1025.07 6558.51 00:10:59.889 PCIE (0000:00:09.0) NSID 1 from core 0: 8228.86 32.14 1943.84 1025.88 6890.22 00:10:59.889 PCIE (0000:00:08.0) NSID 1 from core 0: 8228.86 32.14 1943.80 1035.52 6884.82 00:10:59.889 PCIE (0000:00:08.0) NSID 2 from core 0: 8228.86 32.14 1943.75 992.52 6557.83 00:10:59.889 PCIE (0000:00:08.0) NSID 3 from core 0: 8228.86 32.14 1943.70 892.85 6130.11 00:10:59.889 ======================================================== 00:10:59.889 Total : 49373.17 192.86 1943.64 892.85 6890.22 00:10:59.889 00:10:59.889 09:48:53 -- nvme/nvme.sh@57 -- # wait 65584 00:10:59.889 09:48:53 -- nvme/nvme.sh@61 -- # pid0=65656 00:10:59.889 09:48:53 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:59.889 09:48:53 -- nvme/nvme.sh@63 -- # pid1=65657 00:10:59.889 09:48:53 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:59.889 09:48:53 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:03.170 Initializing NVMe Controllers 00:11:03.170 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:03.170 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:03.170 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:03.170 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:03.170 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:11:03.170 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:11:03.170 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:11:03.170 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:11:03.170 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:11:03.170 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:11:03.170 Initialization complete. Launching workers. 
00:11:03.170 ======================================================== 00:11:03.170 Latency(us) 00:11:03.170 Device Information : IOPS MiB/s Average min max 00:11:03.170 PCIE (0000:00:06.0) NSID 1 from core 1: 5700.84 22.27 2804.90 1079.79 5630.57 00:11:03.170 PCIE (0000:00:07.0) NSID 1 from core 1: 5700.84 22.27 2806.44 1129.35 6134.34 00:11:03.170 PCIE (0000:00:09.0) NSID 1 from core 1: 5700.84 22.27 2806.50 1114.92 5849.18 00:11:03.170 PCIE (0000:00:08.0) NSID 1 from core 1: 5700.84 22.27 2806.65 1126.23 5293.65 00:11:03.170 PCIE (0000:00:08.0) NSID 2 from core 1: 5700.84 22.27 2806.68 1101.19 5445.98 00:11:03.170 PCIE (0000:00:08.0) NSID 3 from core 1: 5700.84 22.27 2806.77 1103.57 5621.01 00:11:03.170 ======================================================== 00:11:03.170 Total : 34205.04 133.61 2806.32 1079.79 6134.34 00:11:03.170 00:11:03.170 Initializing NVMe Controllers 00:11:03.170 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:03.170 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:03.170 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:03.170 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:03.170 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:03.170 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:11:03.170 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:11:03.170 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:11:03.170 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:11:03.170 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:11:03.170 Initialization complete. Launching workers. 00:11:03.170 ======================================================== 00:11:03.170 Latency(us) 00:11:03.170 Device Information : IOPS MiB/s Average min max 00:11:03.170 PCIE (0000:00:06.0) NSID 1 from core 0: 5476.36 21.39 2919.68 1063.62 6136.00 00:11:03.170 PCIE (0000:00:07.0) NSID 1 from core 0: 5476.36 21.39 2920.54 1078.59 5970.93 00:11:03.170 PCIE (0000:00:09.0) NSID 1 from core 0: 5476.36 21.39 2920.22 1006.15 6323.85 00:11:03.170 PCIE (0000:00:08.0) NSID 1 from core 0: 5476.36 21.39 2919.86 963.00 6479.02 00:11:03.170 PCIE (0000:00:08.0) NSID 2 from core 0: 5476.36 21.39 2919.48 881.29 6668.74 00:11:03.170 PCIE (0000:00:08.0) NSID 3 from core 0: 5476.36 21.39 2919.15 862.42 6383.45 00:11:03.170 ======================================================== 00:11:03.170 Total : 32858.15 128.35 2919.82 862.42 6668.74 00:11:03.170 00:11:05.073 Initializing NVMe Controllers 00:11:05.073 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:05.073 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:05.073 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:05.073 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:05.073 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:11:05.073 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:11:05.073 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:11:05.073 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:11:05.073 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:11:05.073 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:11:05.073 Initialization complete. Launching workers. 
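Editor's note: the point of nvme_multi_secondary is that all three perf instances attach to the same four controllers at once; -i 0 puts them in one DPDK shared-memory group (one primary plus secondaries) and the -c masks pin them to distinct cores, which is why each table reports a different "from core N". Stripped of the harness, the pattern traced above is roughly (the suite backgrounds two runs as pid0/pid1 and waits on them):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 &   # pid0
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # pid1
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4     # longest run, on core 2
    wait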
00:11:05.073 ======================================================== 00:11:05.073 Latency(us) 00:11:05.073 Device Information : IOPS MiB/s Average min max 00:11:05.073 PCIE (0000:00:06.0) NSID 1 from core 2: 3620.64 14.14 4416.91 1020.62 17613.07 00:11:05.073 PCIE (0000:00:07.0) NSID 1 from core 2: 3620.64 14.14 4418.48 1041.36 17262.34 00:11:05.073 PCIE (0000:00:09.0) NSID 1 from core 2: 3620.64 14.14 4418.57 1059.28 17088.26 00:11:05.073 PCIE (0000:00:08.0) NSID 1 from core 2: 3620.64 14.14 4418.18 952.81 21518.85 00:11:05.073 PCIE (0000:00:08.0) NSID 2 from core 2: 3620.64 14.14 4418.43 924.13 17039.46 00:11:05.074 PCIE (0000:00:08.0) NSID 3 from core 2: 3620.64 14.14 4418.18 835.94 17414.86 00:11:05.074 ======================================================== 00:11:05.074 Total : 21723.82 84.86 4418.12 835.94 21518.85 00:11:05.074 00:11:05.074 ************************************ 00:11:05.074 END TEST nvme_multi_secondary 00:11:05.074 ************************************ 00:11:05.074 09:48:58 -- nvme/nvme.sh@65 -- # wait 65656 00:11:05.074 09:48:58 -- nvme/nvme.sh@66 -- # wait 65657 00:11:05.074 00:11:05.074 real 0m11.234s 00:11:05.074 user 0m19.052s 00:11:05.074 sys 0m0.847s 00:11:05.074 09:48:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.074 09:48:58 -- common/autotest_common.sh@10 -- # set +x 00:11:05.074 09:48:58 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:05.074 09:48:58 -- nvme/nvme.sh@102 -- # kill_stub 00:11:05.074 09:48:58 -- common/autotest_common.sh@1065 -- # [[ -e /proc/64583 ]] 00:11:05.074 09:48:58 -- common/autotest_common.sh@1066 -- # kill 64583 00:11:05.074 09:48:58 -- common/autotest_common.sh@1067 -- # wait 64583 00:11:05.332 [2024-06-10 09:48:58.934585] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:05.332 [2024-06-10 09:48:58.934667] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:05.332 [2024-06-10 09:48:58.934693] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:05.332 [2024-06-10 09:48:58.934740] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:06.348 [2024-06-10 09:48:59.938479] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:06.348 [2024-06-10 09:48:59.938562] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:06.348 [2024-06-10 09:48:59.938588] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:06.348 [2024-06-10 09:48:59.938610] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:06.915 [2024-06-10 09:49:00.442358] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 
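Editor's note: the bursts of nvme_pcie_common.c ERROR lines here read as teardown noise rather than a regression. kill_stub has just killed the long-lived stub (pid 64583), and as each controller resets, the PCIe transport flushes admin requests still owned by the defunct AER process (pid 65526) and drops them; the earlier bursts during the doorbell and multi_aen tests name pid 65008 for the same reason. One way to see that the drops cluster on just those pids, assuming the console output was saved as build.log (an assumed name):

    grep -o 'pid [0-9]*) is not found' build.log | sort | uniq -c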
00:11:06.915 [2024-06-10 09:49:00.442434] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:06.915 [2024-06-10 09:49:00.442459] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:06.915 [2024-06-10 09:49:00.442481] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:08.291 [2024-06-10 09:49:01.961884] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:08.291 [2024-06-10 09:49:01.961962] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:08.291 [2024-06-10 09:49:01.961995] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:08.291 [2024-06-10 09:49:01.962029] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:11:08.548 09:49:02 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:11:08.548 09:49:02 -- common/autotest_common.sh@1073 -- # echo 2 00:11:08.549 09:49:02 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:08.549 09:49:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:08.549 09:49:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:08.549 09:49:02 -- common/autotest_common.sh@10 -- # set +x 00:11:08.549 ************************************ 00:11:08.549 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:08.549 ************************************ 00:11:08.549 09:49:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:08.549 * Looking for test storage... 
00:11:08.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:08.806 09:49:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:08.806 09:49:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:08.806 09:49:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:08.806 09:49:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:08.806 09:49:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:08.806 09:49:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:08.806 09:49:02 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:08.806 09:49:02 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:08.806 09:49:02 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:08.806 09:49:02 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:08.806 09:49:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:08.806 09:49:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:08.806 09:49:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:08.806 09:49:02 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:08.806 09:49:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:08.806 09:49:02 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:08.806 09:49:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:08.806 09:49:02 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:11:08.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.806 09:49:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:11:08.806 09:49:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:11:08.806 09:49:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65841 00:11:08.806 09:49:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:08.806 09:49:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65841 00:11:08.806 09:49:02 -- common/autotest_common.sh@819 -- # '[' -z 65841 ']' 00:11:08.806 09:49:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:08.806 09:49:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.806 09:49:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:08.806 09:49:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.806 09:49:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:08.806 09:49:02 -- common/autotest_common.sh@10 -- # set +x 00:11:08.806 [2024-06-10 09:49:02.498245] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:08.806 [2024-06-10 09:49:02.498425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65841 ] 00:11:09.065 [2024-06-10 09:49:02.689530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.323 [2024-06-10 09:49:02.872767] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:09.323 [2024-06-10 09:49:02.873244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.323 [2024-06-10 09:49:02.873422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.323 [2024-06-10 09:49:02.873552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.323 [2024-06-10 09:49:02.873567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.696 09:49:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:10.696 09:49:04 -- common/autotest_common.sh@852 -- # return 0 00:11:10.696 09:49:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:11:10.696 09:49:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:10.696 09:49:04 -- common/autotest_common.sh@10 -- # set +x 00:11:10.696 nvme0n1 00:11:10.696 09:49:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:10.696 09:49:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:10.696 09:49:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_Ny95X.txt 00:11:10.696 09:49:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:10.696 09:49:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:10.696 09:49:04 -- common/autotest_common.sh@10 -- # set +x 00:11:10.696 true 00:11:10.696 09:49:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:10.696 09:49:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:10.696 09:49:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1718012944 00:11:10.696 09:49:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65876 00:11:10.696 09:49:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:10.696 09:49:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:10.696 09:49:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:12.595 09:49:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.595 09:49:06 -- common/autotest_common.sh@10 -- # set +x 00:11:12.595 [2024-06-10 09:49:06.225860] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:12.595 [2024-06-10 09:49:06.226262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:12.595 [2024-06-10 09:49:06.226305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:12.595 [2024-06-10 09:49:06.226326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.595 [2024-06-10 09:49:06.228218] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:12.595 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65876 00:11:12.595 09:49:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65876 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65876 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:12.595 09:49:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.595 09:49:06 -- common/autotest_common.sh@10 -- # set +x 00:11:12.595 09:49:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_Ny95X.txt 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_Ny95X.txt 00:11:12.595 09:49:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65841 00:11:12.596 09:49:06 -- common/autotest_common.sh@926 -- # '[' -z 65841 ']' 00:11:12.596 09:49:06 -- common/autotest_common.sh@930 -- # kill -0 65841 00:11:12.596 09:49:06 -- common/autotest_common.sh@931 -- # uname 00:11:12.596 09:49:06 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:12.596 09:49:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65841 00:11:12.854 killing process with pid 65841 00:11:12.854 09:49:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:12.854 09:49:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:12.854 09:49:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65841' 00:11:12.854 09:49:06 -- common/autotest_common.sh@945 -- # kill 65841 00:11:12.854 09:49:06 -- common/autotest_common.sh@950 -- # wait 65841 00:11:14.753 09:49:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:14.753 09:49:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:14.753 ************************************ 00:11:14.753 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:14.753 ************************************ 00:11:14.753 00:11:14.753 real 0m6.183s 00:11:14.753 user 0m22.045s 00:11:14.753 sys 0m0.588s 00:11:14.753 09:49:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.753 09:49:08 -- common/autotest_common.sh@10 -- # set +x 00:11:14.753 09:49:08 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:14.753 09:49:08 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:14.753 09:49:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:14.753 09:49:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:14.753 09:49:08 -- common/autotest_common.sh@10 -- # set +x 00:11:14.753 ************************************ 00:11:14.753 START TEST nvme_fio 00:11:14.753 ************************************ 00:11:14.753 09:49:08 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:11:14.753 09:49:08 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:14.753 09:49:08 -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:14.753 09:49:08 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:14.753 09:49:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:14.753 09:49:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:14.753 09:49:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:14.753 09:49:08 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:14.753 09:49:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:15.011 09:49:08 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:15.011 09:49:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:15.011 09:49:08 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:11:15.011 09:49:08 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:15.011 09:49:08 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:15.011 09:49:08 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:15.011 09:49:08 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:15.269 09:49:08 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:15.269 09:49:08 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:15.526 09:49:09 -- nvme/nvme.sh@41 -- # bs=4096 00:11:15.526 09:49:09 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:15.526 09:49:09 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:15.526 09:49:09 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:15.526 09:49:09 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:15.526 09:49:09 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:15.526 09:49:09 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:15.526 09:49:09 -- common/autotest_common.sh@1320 -- # shift 00:11:15.526 09:49:09 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:15.526 09:49:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:15.526 09:49:09 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:15.526 09:49:09 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:15.526 09:49:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:15.526 09:49:09 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:15.526 09:49:09 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:15.526 09:49:09 -- common/autotest_common.sh@1326 -- # break 00:11:15.526 09:49:09 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:15.526 09:49:09 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:15.526 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:15.526 fio-3.35 00:11:15.526 Starting 1 thread 00:11:18.807 00:11:18.807 test: (groupid=0, jobs=1): err= 0: pid=66021: Mon Jun 10 09:49:12 2024 00:11:18.807 read: IOPS=16.9k, BW=66.1MiB/s (69.3MB/s)(132MiB/2001msec) 00:11:18.807 slat (nsec): min=4526, max=54006, avg=5752.77, stdev=1652.13 00:11:18.807 clat (usec): min=222, max=9050, avg=3756.62, stdev=485.81 00:11:18.807 lat (usec): min=227, max=9104, avg=3762.37, stdev=486.41 00:11:18.807 clat percentiles (usec): 00:11:18.807 | 1.00th=[ 3064], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3490], 00:11:18.807 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3720], 00:11:18.807 | 70.00th=[ 3752], 80.00th=[ 3982], 90.00th=[ 4359], 95.00th=[ 4490], 00:11:18.807 | 99.00th=[ 5276], 99.50th=[ 6915], 99.90th=[ 7963], 99.95th=[ 8094], 00:11:18.807 | 99.99th=[ 8979] 00:11:18.807 bw ( KiB/s): min=58104, max=71400, per=98.83%, avg=66881.00, stdev=7602.23, samples=3 00:11:18.807 iops : min=14526, max=17850, avg=16720.00, stdev=1900.35, samples=3 00:11:18.807 write: IOPS=17.0k, BW=66.3MiB/s (69.5MB/s)(133MiB/2001msec); 0 zone resets 00:11:18.807 slat (usec): min=4, max=190, avg= 5.90, stdev= 1.93 00:11:18.807 clat (usec): min=294, max=8970, avg=3771.46, stdev=495.61 00:11:18.807 lat (usec): min=300, max=8981, avg=3777.36, stdev=496.22 00:11:18.807 clat percentiles (usec): 00:11:18.807 | 1.00th=[ 3097], 5.00th=[ 3326], 10.00th=[ 3425], 20.00th=[ 3490], 00:11:18.808 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3720], 00:11:18.808 | 70.00th=[ 3785], 80.00th=[ 3982], 90.00th=[ 4359], 95.00th=[ 4490], 00:11:18.808 | 99.00th=[ 5604], 99.50th=[ 7046], 
99.90th=[ 8029], 99.95th=[ 8094], 00:11:18.808 | 99.99th=[ 8717] 00:11:18.808 bw ( KiB/s): min=58392, max=71544, per=98.53%, avg=66844.33, stdev=7335.23, samples=3 00:11:18.808 iops : min=14598, max=17886, avg=16711.00, stdev=1833.74, samples=3 00:11:18.808 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:11:18.808 lat (msec) : 2=0.13%, 4=80.03%, 10=19.81% 00:11:18.808 cpu : usr=98.90%, sys=0.15%, ctx=4, majf=0, minf=607 00:11:18.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:18.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.808 issued rwts: total=33853,33938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.808 00:11:18.808 Run status group 0 (all jobs): 00:11:18.808 READ: bw=66.1MiB/s (69.3MB/s), 66.1MiB/s-66.1MiB/s (69.3MB/s-69.3MB/s), io=132MiB (139MB), run=2001-2001msec 00:11:18.808 WRITE: bw=66.3MiB/s (69.5MB/s), 66.3MiB/s-66.3MiB/s (69.5MB/s-69.5MB/s), io=133MiB (139MB), run=2001-2001msec 00:11:18.808 ----------------------------------------------------- 00:11:18.808 Suppressions used: 00:11:18.808 count bytes template 00:11:18.808 1 32 /usr/src/fio/parse.c 00:11:18.808 1 8 libtcmalloc_minimal.so 00:11:18.808 ----------------------------------------------------- 00:11:18.808 00:11:19.068 09:49:12 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:19.068 09:49:12 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:19.068 09:49:12 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:19.068 09:49:12 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:19.326 09:49:12 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:19.326 09:49:12 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:19.326 09:49:13 -- nvme/nvme.sh@41 -- # bs=4096 00:11:19.326 09:49:13 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:19.326 09:49:13 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:19.326 09:49:13 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:19.326 09:49:13 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:19.326 09:49:13 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:19.326 09:49:13 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:19.327 09:49:13 -- common/autotest_common.sh@1320 -- # shift 00:11:19.327 09:49:13 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:19.327 09:49:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:19.327 09:49:13 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:19.327 09:49:13 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:19.327 09:49:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:19.585 09:49:13 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:19.585 09:49:13 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 
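(Every fio launch in this section follows the same preload dance, condensed below: find the sanitizer runtime the SPDK fio plugin links against, preload it ahead of the plugin so fio's dlopen of the ioengine survives under ASAN, and pass the controller in SPDK's filename syntax, where the colons of the PCIe address become dots because fio reserves ':' in filenames. A condensed sketch using this run's paths and its second controller:)

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
# Locate the ASAN runtime the plugin was linked against, as in the trace above
# (the pipeline simply yields an empty string on non-ASAN builds).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096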
00:11:19.585 09:49:13 -- common/autotest_common.sh@1326 -- # break 00:11:19.585 09:49:13 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:19.585 09:49:13 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:19.585 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:19.585 fio-3.35 00:11:19.585 Starting 1 thread 00:11:22.868 00:11:22.868 test: (groupid=0, jobs=1): err= 0: pid=66083: Mon Jun 10 09:49:16 2024 00:11:22.868 read: IOPS=16.7k, BW=65.1MiB/s (68.2MB/s)(130MiB/2001msec) 00:11:22.868 slat (nsec): min=4554, max=48481, avg=5848.41, stdev=1566.91 00:11:22.868 clat (usec): min=243, max=9898, avg=3821.88, stdev=501.61 00:11:22.868 lat (usec): min=249, max=9947, avg=3827.72, stdev=502.15 00:11:22.868 clat percentiles (usec): 00:11:22.868 | 1.00th=[ 2802], 5.00th=[ 3163], 10.00th=[ 3359], 20.00th=[ 3523], 00:11:22.868 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3851], 00:11:22.868 | 70.00th=[ 3949], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4490], 00:11:22.868 | 99.00th=[ 4752], 99.50th=[ 6652], 99.90th=[ 7898], 99.95th=[ 8586], 00:11:22.868 | 99.99th=[ 9765] 00:11:22.868 bw ( KiB/s): min=60496, max=67568, per=96.72%, avg=64429.33, stdev=3602.35, samples=3 00:11:22.868 iops : min=15124, max=16892, avg=16107.33, stdev=900.59, samples=3 00:11:22.868 write: IOPS=16.7k, BW=65.2MiB/s (68.3MB/s)(130MiB/2001msec); 0 zone resets 00:11:22.868 slat (nsec): min=4594, max=66291, avg=5949.84, stdev=1609.72 00:11:22.868 clat (usec): min=233, max=9834, avg=3827.63, stdev=492.24 00:11:22.868 lat (usec): min=238, max=9846, avg=3833.58, stdev=492.81 00:11:22.868 clat percentiles (usec): 00:11:22.868 | 1.00th=[ 2835], 5.00th=[ 3163], 10.00th=[ 3359], 20.00th=[ 3523], 00:11:22.868 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3851], 00:11:22.868 | 70.00th=[ 3949], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4490], 00:11:22.868 | 99.00th=[ 4686], 99.50th=[ 6456], 99.90th=[ 7963], 99.95th=[ 8848], 00:11:22.868 | 99.99th=[ 9765] 00:11:22.868 bw ( KiB/s): min=59944, max=67736, per=96.17%, avg=64181.33, stdev=3940.60, samples=3 00:11:22.868 iops : min=14986, max=16934, avg=16045.33, stdev=985.15, samples=3 00:11:22.868 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:22.868 lat (msec) : 2=0.17%, 4=72.63%, 10=27.16% 00:11:22.868 cpu : usr=98.75%, sys=0.30%, ctx=6, majf=0, minf=606 00:11:22.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:22.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.868 issued rwts: total=33324,33386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.868 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.868 00:11:22.868 Run status group 0 (all jobs): 00:11:22.868 READ: bw=65.1MiB/s (68.2MB/s), 65.1MiB/s-65.1MiB/s (68.2MB/s-68.2MB/s), io=130MiB (136MB), run=2001-2001msec 00:11:22.868 WRITE: bw=65.2MiB/s (68.3MB/s), 65.2MiB/s-65.2MiB/s (68.3MB/s-68.3MB/s), io=130MiB (137MB), run=2001-2001msec 00:11:23.127 ----------------------------------------------------- 00:11:23.127 Suppressions used: 00:11:23.127 count bytes template 00:11:23.127 1 32 /usr/src/fio/parse.c 00:11:23.127 1 8 libtcmalloc_minimal.so 00:11:23.127 
----------------------------------------------------- 00:11:23.127 00:11:23.127 09:49:16 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:23.127 09:49:16 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:23.127 09:49:16 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:23.127 09:49:16 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:23.386 09:49:16 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:23.386 09:49:16 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:23.645 09:49:17 -- nvme/nvme.sh@41 -- # bs=4096 00:11:23.645 09:49:17 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:23.645 09:49:17 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:23.645 09:49:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:23.645 09:49:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:23.645 09:49:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:23.645 09:49:17 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:23.646 09:49:17 -- common/autotest_common.sh@1320 -- # shift 00:11:23.646 09:49:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:23.646 09:49:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:23.646 09:49:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:23.646 09:49:17 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:23.646 09:49:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:23.646 09:49:17 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:23.646 09:49:17 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:23.646 09:49:17 -- common/autotest_common.sh@1326 -- # break 00:11:23.646 09:49:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:23.646 09:49:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:23.904 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:23.904 fio-3.35 00:11:23.904 Starting 1 thread 00:11:27.206 00:11:27.206 test: (groupid=0, jobs=1): err= 0: pid=66149: Mon Jun 10 09:49:20 2024 00:11:27.206 read: IOPS=16.9k, BW=66.0MiB/s (69.3MB/s)(132MiB/2001msec) 00:11:27.206 slat (nsec): min=4294, max=54285, avg=5688.01, stdev=1690.46 00:11:27.206 clat (usec): min=265, max=9146, avg=3759.53, stdev=377.21 00:11:27.206 lat (usec): min=282, max=9200, avg=3765.22, stdev=377.67 00:11:27.206 clat percentiles (usec): 00:11:27.206 | 1.00th=[ 3228], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3523], 00:11:27.206 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3752], 00:11:27.206 | 70.00th=[ 3851], 80.00th=[ 3982], 90.00th=[ 4178], 95.00th=[ 4293], 00:11:27.206 | 99.00th=[ 4817], 99.50th=[ 5538], 99.90th=[ 7439], 99.95th=[ 7701], 00:11:27.206 | 99.99th=[ 8979] 00:11:27.206 
bw ( KiB/s): min=63024, max=68664, per=98.16%, avg=66384.00, stdev=2971.06, samples=3 00:11:27.206 iops : min=15756, max=17166, avg=16596.00, stdev=742.77, samples=3 00:11:27.206 write: IOPS=17.0k, BW=66.2MiB/s (69.4MB/s)(133MiB/2001msec); 0 zone resets 00:11:27.206 slat (nsec): min=4444, max=33306, avg=5840.85, stdev=1662.42 00:11:27.206 clat (usec): min=302, max=8978, avg=3771.68, stdev=390.52 00:11:27.206 lat (usec): min=308, max=8990, avg=3777.53, stdev=390.96 00:11:27.206 clat percentiles (usec): 00:11:27.206 | 1.00th=[ 3228], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3523], 00:11:27.206 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3785], 00:11:27.206 | 70.00th=[ 3884], 80.00th=[ 3982], 90.00th=[ 4178], 95.00th=[ 4293], 00:11:27.206 | 99.00th=[ 4948], 99.50th=[ 5735], 99.90th=[ 7504], 99.95th=[ 7767], 00:11:27.206 | 99.99th=[ 8717] 00:11:27.206 bw ( KiB/s): min=62840, max=68472, per=97.77%, avg=66304.00, stdev=3031.43, samples=3 00:11:27.207 iops : min=15710, max=17118, avg=16576.00, stdev=757.86, samples=3 00:11:27.207 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:27.207 lat (msec) : 2=0.05%, 4=80.53%, 10=19.38% 00:11:27.207 cpu : usr=99.00%, sys=0.05%, ctx=4, majf=0, minf=607 00:11:27.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:27.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:27.207 issued rwts: total=33831,33924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:27.207 00:11:27.207 Run status group 0 (all jobs): 00:11:27.207 READ: bw=66.0MiB/s (69.3MB/s), 66.0MiB/s-66.0MiB/s (69.3MB/s-69.3MB/s), io=132MiB (139MB), run=2001-2001msec 00:11:27.207 WRITE: bw=66.2MiB/s (69.4MB/s), 66.2MiB/s-66.2MiB/s (69.4MB/s-69.4MB/s), io=133MiB (139MB), run=2001-2001msec 00:11:27.207 ----------------------------------------------------- 00:11:27.207 Suppressions used: 00:11:27.207 count bytes template 00:11:27.207 1 32 /usr/src/fio/parse.c 00:11:27.207 1 8 libtcmalloc_minimal.so 00:11:27.207 ----------------------------------------------------- 00:11:27.207 00:11:27.207 09:49:20 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:27.207 09:49:20 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:27.207 09:49:20 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:27.207 09:49:20 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:27.465 09:49:21 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:27.465 09:49:21 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:27.724 09:49:21 -- nvme/nvme.sh@41 -- # bs=4096 00:11:27.724 09:49:21 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:27.724 09:49:21 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:27.724 09:49:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:27.724 09:49:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:27.724 09:49:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:27.724 09:49:21 -- 
common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:27.724 09:49:21 -- common/autotest_common.sh@1320 -- # shift 00:11:27.724 09:49:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:27.724 09:49:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:27.724 09:49:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:27.724 09:49:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:27.724 09:49:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:27.724 09:49:21 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:27.724 09:49:21 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:27.724 09:49:21 -- common/autotest_common.sh@1326 -- # break 00:11:27.724 09:49:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:27.724 09:49:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:27.983 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:27.983 fio-3.35 00:11:27.983 Starting 1 thread 00:11:32.183 00:11:32.183 test: (groupid=0, jobs=1): err= 0: pid=66215: Mon Jun 10 09:49:25 2024 00:11:32.183 read: IOPS=16.9k, BW=66.2MiB/s (69.4MB/s)(132MiB/2001msec) 00:11:32.183 slat (nsec): min=4524, max=65373, avg=5589.77, stdev=1608.73 00:11:32.183 clat (usec): min=254, max=10651, avg=3752.12, stdev=418.24 00:11:32.183 lat (usec): min=260, max=10704, avg=3757.71, stdev=418.83 00:11:32.183 clat percentiles (usec): 00:11:32.183 | 1.00th=[ 3294], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3556], 00:11:32.183 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3720], 00:11:32.183 | 70.00th=[ 3785], 80.00th=[ 3851], 90.00th=[ 3949], 95.00th=[ 4490], 00:11:32.183 | 99.00th=[ 5211], 99.50th=[ 6259], 99.90th=[ 7504], 99.95th=[ 9110], 00:11:32.183 | 99.99th=[10421] 00:11:32.183 bw ( KiB/s): min=63448, max=69256, per=99.02%, avg=67117.33, stdev=3192.24, samples=3 00:11:32.183 iops : min=15862, max=17314, avg=16779.33, stdev=798.06, samples=3 00:11:32.183 write: IOPS=17.0k, BW=66.4MiB/s (69.6MB/s)(133MiB/2001msec); 0 zone resets 00:11:32.183 slat (nsec): min=4643, max=65263, avg=5707.40, stdev=1628.85 00:11:32.183 clat (usec): min=314, max=10483, avg=3762.53, stdev=414.86 00:11:32.183 lat (usec): min=320, max=10494, avg=3768.23, stdev=415.47 00:11:32.183 clat percentiles (usec): 00:11:32.183 | 1.00th=[ 3294], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3556], 00:11:32.183 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3720], 00:11:32.183 | 70.00th=[ 3785], 80.00th=[ 3851], 90.00th=[ 3982], 95.00th=[ 4555], 00:11:32.183 | 99.00th=[ 5211], 99.50th=[ 6194], 99.90th=[ 7570], 99.95th=[ 9241], 00:11:32.183 | 99.99th=[10290] 00:11:32.183 bw ( KiB/s): min=63704, max=69016, per=98.67%, avg=67056.00, stdev=2916.78, samples=3 00:11:32.183 iops : min=15926, max=17254, avg=16764.00, stdev=729.19, samples=3 00:11:32.183 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:11:32.183 lat (msec) : 2=0.05%, 4=91.12%, 10=8.76%, 20=0.03% 00:11:32.183 cpu : usr=99.15%, sys=0.05%, ctx=3, majf=0, minf=605 00:11:32.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:32.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:32.183 issued rwts: total=33907,33998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:32.183 00:11:32.183 Run status group 0 (all jobs): 00:11:32.183 READ: bw=66.2MiB/s (69.4MB/s), 66.2MiB/s-66.2MiB/s (69.4MB/s-69.4MB/s), io=132MiB (139MB), run=2001-2001msec 00:11:32.183 WRITE: bw=66.4MiB/s (69.6MB/s), 66.4MiB/s-66.4MiB/s (69.6MB/s-69.6MB/s), io=133MiB (139MB), run=2001-2001msec 00:11:32.442 ----------------------------------------------------- 00:11:32.442 Suppressions used: 00:11:32.442 count bytes template 00:11:32.442 1 32 /usr/src/fio/parse.c 00:11:32.442 1 8 libtcmalloc_minimal.so 00:11:32.442 ----------------------------------------------------- 00:11:32.442 00:11:32.442 09:49:26 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:32.442 09:49:26 -- nvme/nvme.sh@46 -- # true 00:11:32.442 00:11:32.442 real 0m17.603s 00:11:32.442 user 0m13.938s 00:11:32.442 sys 0m3.046s 00:11:32.442 ************************************ 00:11:32.442 END TEST nvme_fio 00:11:32.442 ************************************ 00:11:32.442 09:49:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.442 09:49:26 -- common/autotest_common.sh@10 -- # set +x 00:11:32.442 ************************************ 00:11:32.442 END TEST nvme 00:11:32.442 ************************************ 00:11:32.442 00:11:32.442 real 1m35.287s 00:11:32.442 user 3m48.258s 00:11:32.442 sys 0m15.299s 00:11:32.442 09:49:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.442 09:49:26 -- common/autotest_common.sh@10 -- # set +x 00:11:32.442 09:49:26 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:11:32.442 09:49:26 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:32.442 09:49:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:32.442 09:49:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:32.442 09:49:26 -- common/autotest_common.sh@10 -- # set +x 00:11:32.442 ************************************ 00:11:32.442 START TEST nvme_scc 00:11:32.442 ************************************ 00:11:32.442 09:49:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:32.701 * Looking for test storage... 
00:11:32.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:32.701 09:49:26 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:32.701 09:49:26 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:32.701 09:49:26 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:32.701 09:49:26 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:32.701 09:49:26 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:32.701 09:49:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.701 09:49:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.701 09:49:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.701 09:49:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.701 09:49:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.701 09:49:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.701 09:49:26 -- paths/export.sh@5 -- # export PATH 00:11:32.701 09:49:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.701 09:49:26 -- nvme/functions.sh@10 -- # ctrls=() 00:11:32.701 09:49:26 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:32.701 09:49:26 -- nvme/functions.sh@11 -- # nvmes=() 00:11:32.701 09:49:26 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:32.701 09:49:26 -- nvme/functions.sh@12 -- # bdfs=() 00:11:32.701 09:49:26 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:32.701 09:49:26 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:32.701 09:49:26 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:32.701 09:49:26 -- nvme/functions.sh@14 -- # nvme_name= 00:11:32.701 09:49:26 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.701 09:49:26 -- nvme/nvme_scc.sh@12 -- # uname 00:11:32.701 09:49:26 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 
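(The wall of eval assignments that follows is nvme_get populating a global associative array with one entry per id-ctrl field. A simplified version of that parser, assuming nvme-cli's usual 'field : value' layout; the real functions.sh code additionally walks namespaces and preserves padded string values exactly, so this is a sketch:)

declare -gA nvme0=()                 # mirrors the local -gA 'nvme0=()' below
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}         # 'vid       ' -> 'vid'
    val=${val# }                     # drop the space nvme-cli prints after ':'
    [[ -n $reg && -n $val ]] || continue
    eval "nvme0[$reg]=\"$val\""      # e.g. nvme0[vid]="0x1b36"
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "vid=${nvme0[vid]} sn=${nvme0[sn]} mdts=${nvme0[mdts]}"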
00:11:32.701 09:49:26 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:32.701 09:49:26 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:32.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.218 Waiting for block devices as requested 00:11:33.218 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.218 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.218 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.477 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:38.820 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:38.820 09:49:32 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:38.820 09:49:32 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:38.820 09:49:32 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:38.820 09:49:32 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:38.820 09:49:32 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:38.820 09:49:32 -- scripts/common.sh@15 -- # local i 00:11:38.820 09:49:32 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:38.820 09:49:32 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:38.820 09:49:32 -- scripts/common.sh@24 -- # return 0 00:11:38.820 09:49:32 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:38.820 09:49:32 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:38.820 09:49:32 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@18 -- # shift 00:11:38.820 09:49:32 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 
00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.820 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.820 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.820 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 
-- # [[ -n 3 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.821 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:38.821 09:49:32 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:38.821 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 
09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # 
nvme0[pels]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # 
eval 'nvme0[awun]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.822 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:38.822 09:49:32 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.822 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:38.823 09:49:32 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:38.823 09:49:32 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:38.823 09:49:32 -- nvme/functions.sh@62 -- # 
bdfs["$ctrl_dev"]=0000:00:09.0 00:11:38.823 09:49:32 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:38.823 09:49:32 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:38.823 09:49:32 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:38.823 09:49:32 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:38.823 09:49:32 -- scripts/common.sh@15 -- # local i 00:11:38.823 09:49:32 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:38.823 09:49:32 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:38.823 09:49:32 -- scripts/common.sh@24 -- # return 0 00:11:38.823 09:49:32 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:38.823 09:49:32 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:38.823 09:49:32 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@18 -- # shift 00:11:38.823 09:49:32 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 
525400 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.823 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:38.823 09:49:32 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:38.823 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 
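
[Editor's note] The block above is the nvme_get walk for /dev/nvme1: functions.sh@16 invokes /usr/local/src/nvme-cli/nvme id-ctrl, and each @21-@23 triplet splits one output line on ':' and evals the pair into a global associative array. A condensed sketch of that pattern (helper name and whitespace handling are illustrative, not the SPDK source):

  nvme_get_sketch() {
      # ref = array name (e.g. nvme1), cmd = id-ctrl or id-ns, dev = /dev/nvme1
      local ref=$1 cmd=$2 dev=$3 reg val
      declare -gA "$ref"                        # global associative array
      while IFS=: read -r reg val; do           # split "vid : 0x1b36" on ':'
          reg=${reg//[[:space:]]/}              # field name without padding
          [[ -n $reg && -n $val ]] || continue  # skip banner/blank lines
          eval "${ref}[\$reg]=\${val# }"        # e.g. nvme1[vid]=0x1b36
      done < <(nvme "$cmd" "$dev")
  }
  # nvme_get_sketch nvme1 id-ctrl /dev/nvme1; echo "${nvme1[sn]}"  -> '12342 '

Values keep everything after the first colon, which is why subnqn strings with embedded colons survive intact in the trace.
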
-- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 
'nvme1[frmw]="0x3"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
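
[Editor's note] Before each of these parses, functions.sh@47-@52 (visible where nvme1 was picked up above) iterates /sys/class/nvme, resolves the controller's PCI address, and gates it through pci_can_use from scripts/common.sh. A rough reconstruction of that control flow, reusing nvme_get_sketch from the previous note; the PCI lookup and the allow/deny check are simplified stand-ins:

  # PCI_BLOCKED is empty in this run, so every controller passes
  # ([[ -z '' ]] at scripts/common.sh@22). This only approximates the check.
  pci_can_use_sketch() {
      [[ ! " ${PCI_BLOCKED:-} " =~ " $1 " ]]
  }
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue                       # functions.sh@48
      pci=$(basename "$(readlink -f "$ctrl/device")")  # @49: e.g. 0000:00:08.0
      pci_can_use_sketch "$pci" || continue            # @50: skip reserved devs
      ctrl_dev=${ctrl##*/}                             # @51: nvme0, nvme1, ...
      nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # @52
  done
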
00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.824 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.824 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.824 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 
09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- 
# IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.825 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.825 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.825 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- 
# nvme1[icsvscc]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:38.826 09:49:32 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:38.826 09:49:32 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:38.826 09:49:32 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:38.826 09:49:32 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@18 -- # shift 00:11:38.826 09:49:32 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 
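
[Editor's note] At functions.sh@54-@57 above, the same machinery recurses into the controller's namespaces: every /sys/class/nvme/nvme1/nvme1n* entry gets its own id-ns array (nvme1n1, parsed next), and @58 later files it into the nameref'd nvme1_ns table. A sketch of that loop, assuming the helper from the first note:

  ctrl=/sys/class/nvme/nvme1                        # as in the trace above
  declare -n _ctrl_ns=nvme1_ns                      # functions.sh@53 nameref
  for ns in "$ctrl/${ctrl##*/}n"*; do               # nvme1n1, nvme1n2, ...
      [[ -e $ns ]] || continue                      # @55
      ns_dev=${ns##*/}                              # @56
      nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"   # @57: nvme1n1[nsze]=...
      _ctrl_ns[${ns_dev##*n}]=$ns_dev               # @58: nvme1_ns[1]=nvme1n1
  done
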
09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.826 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:38.826 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:38.826 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:38.827 09:49:32 -- 
nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 
09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:38.827 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.827 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.827 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:38.828 
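
[Editor's note] The lbaf0-lbaf7 rows being parsed here describe the eight LBA formats nvme1n1 advertises; flbas=0x4 marks lbaf4 ("ms:0 lbads:12 rp:0 (in use)") as active, i.e. 4096-byte data blocks with no per-block metadata. A small decode of those two fields, seeded with the values from this trace:

  declare -A nvme1n1=( [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )
  idx=$(( ${nvme1n1[flbas]} & 0xf ))      # low nibble selects the format -> 4
  lbaf=${nvme1n1[lbaf$idx]}               # 'ms:0 lbads:12 rp:0 (in use)'
  lbads=${lbaf#*lbads:}                   # strip through 'lbads:'
  lbads=${lbads%% *}                      # keep the exponent -> 12
  echo "nvme1n1 in-use block size: $((1 << lbads)) bytes"   # -> 4096
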
09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:38.828 09:49:32 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:38.828 09:49:32 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:38.828 09:49:32 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:38.828 09:49:32 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@18 -- # shift 00:11:38.828 09:49:32 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 
00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 
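Each IFS=: / read / eval triplet in this trace is one turn of the nvme_get loop in nvme/functions.sh: nvme-cli's id-ns output is split on ':' into a register name and a value, and the pair is eval'd into a global associative array named after the device. A standalone sketch of the same pattern, assuming an nvme binary on $PATH (the harness pins /usr/local/src/nvme-cli/nvme):

  declare -gA ns_info=()
  while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue    # skip lines without a field:value pair
    reg=${reg//[[:space:]]/}                # field name, e.g. nsze
    val=${val# }                            # value, e.g. 0x100000
    ns_info[$reg]=$val
  done < <(nvme id-ns /dev/nvme1n2)
  echo "nsze=${ns_info[nsze]}"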
00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.828 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:38.828 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.828 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 
09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:38.829 09:49:32 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:38.829 09:49:32 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:38.829 09:49:32 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 
id-ns /dev/nvme1n3 00:11:38.829 09:49:32 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@18 -- # shift 00:11:38.829 09:49:32 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # 
nvme1n3[dps]=0 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.829 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:38.829 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.829 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 
rp:0 '
00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=:
00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val
00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=:
00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val
00:11:38.830 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "'
00:11:38.830 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # IFS=:
00:11:38.830 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val
00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "'
00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=:
00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val
00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "'
00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=:
00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val
00:11:38.831 09:49:32 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3
00:11:38.831 09:49:32 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:11:38.831 09:49:32 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:11:38.831 09:49:32 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0
00:11:38.831 09:49:32 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:11:38.831 09:49:32 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:38.831 09:49:32 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:11:38.831 09:49:32 -- nvme/functions.sh@49 -- # pci=0000:00:06.0
00:11:38.831 09:49:32 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0
00:11:38.831 09:49:32 -- scripts/common.sh@15 -- # local i
00:11:38.831 09:49:32 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]]
00:11:38.831 09:49:32 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:11:38.831 09:49:32 -- scripts/common.sh@24 -- # return 0
00:11:38.831 09:49:32 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:11:38.831 09:49:32 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:11:38.831 09:49:32 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:11:38.831 09:49:32 -- nvme/functions.sh@18 -- # shift
00:11:38.831 09:49:32 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=:
00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val
00:11:38.831 09:49:32 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=:
00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val
00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:38.831 09:49:32 --
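Once the last namespace is parsed, the lines at functions.sh@58-63 above wire the controller into the global bookkeeping: _ctrl_ns is indexed by the digits after the final 'n' of the namespace name, ctrls/nvmes/bdfs map the device to its register array, namespace table, and PCI address, and ordered_ctrls keeps numeric order; pci_can_use (scripts/common.sh) returned 0 for 0000:00:06.0 because no block/allow filter was set. A sketch of the parameter expansions involved, with illustrative values matching the trace:

  ns=/sys/class/nvme/nvme1/nvme1n3
  ctrl_dev=nvme1
  echo "${ns##*n}"            # -> 3  (namespace index: strip through the last 'n')
  echo "${ctrl_dev/nvme/}"    # -> 1  (controller index for ordered_ctrls)
  declare -A bdfs=([nvme1]=0000:00:08.0)
  echo "${bdfs[nvme1]}"       # -> 0000:00:08.0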
nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 
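Among the id-ctrl registers just captured, ver packs the controller's NVMe spec version one component per byte (major in bits 31:16, minor in 15:8, tertiary in 7:0), so the 0x10400 above means NVMe 1.4.0:

  ver=0x10400
  printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
  # -> NVMe 1.4.0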
09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.831 09:49:32 -- 
nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.831 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.831 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:38.831 09:49:32 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:38.832 
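The thermal thresholds captured a few lines up (wctemp=343, cctemp=373) are reported in Kelvin per the id-ctrl definition, so this QEMU controller warns at roughly 70 C and goes critical at about 100 C:

  for k in 343 373; do echo "$k K ~ $((k - 273)) C"; done
  # -> 343 K ~ 70 C
  # -> 373 K ~ 100 C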
09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.832 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.832 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:38.832 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 
'nvme2[oncs]="0x15d"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
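oncs is a bitmask of optional NVM commands; the 0x15d here advertises, among others, Dataset Management and Write Zeroes (bit positions per the NVMe base spec; verify against the revision you target). Probing it with shell arithmetic, the way gating helpers typically would:

  oncs=0x15d
  (( oncs & (1 << 2) )) && echo 'Dataset Management supported'
  (( oncs & (1 << 3) )) && echo 'Write Zeroes supported'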
00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:38.833 09:49:32 -- 
nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:38.833 09:49:32 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:38.833 09:49:32 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:38.833 09:49:32 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:38.833 09:49:32 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@18 -- # shift 00:11:38.833 09:49:32 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.833 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.833 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.833 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 
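Note: nvme2n1 reports nsze/ncap/nuse of 0x17a17a logical blocks, and the LBA format list read just below shows flbas=0x7 selecting lbaf7 (lbads:12, i.e. 4096-byte blocks, marked "(in use)"). A quick sketch turning those traced fields into a capacity figure (field values copied from the trace; the arithmetic itself is illustrative):

    # Convert the identify-namespace fields above into a byte count.
    nsze=0x17a17a   # namespace size in logical blocks (from the trace)
    lbads=12        # in-use format lbaf7 has lbads:12 -> 2^12 = 4096 B blocks
    bytes=$(( nsze * (1 << lbads) ))
    echo "$((nsze)) blocks x $((1 << lbads)) B = $bytes bytes (~$((bytes >> 30)) GiB)"
    # -> 1548666 blocks x 4096 B = 6343335936 bytes (~5 GiB)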
00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.834 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.834 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:38.834 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:38.835 09:49:32 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:38.835 09:49:32 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:38.835 09:49:32 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:38.835 09:49:32 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:38.835 09:49:32 -- nvme/functions.sh@47 -- # for ctrl 
in /sys/class/nvme/nvme* 00:11:38.835 09:49:32 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:38.835 09:49:32 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:38.835 09:49:32 -- scripts/common.sh@15 -- # local i 00:11:38.835 09:49:32 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:38.835 09:49:32 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:38.835 09:49:32 -- scripts/common.sh@24 -- # return 0 00:11:38.835 09:49:32 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:38.835 09:49:32 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:38.835 09:49:32 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@18 -- # shift 00:11:38.835 09:49:32 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:38.835 09:49:32 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:38.835 09:49:32 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.835 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.835 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:38.836 09:49:32 -- 
nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.836 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:38.836 09:49:32 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:38.836 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.097 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.097 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.097 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.097 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.097 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.097 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.097 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:39.097 09:49:32 
-- nvme/functions.sh@21 -- # IFS=: 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.097 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:39.097 09:49:32 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.097 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:39.098 09:49:32 -- 
nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
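Note: the wctemp=343 and cctemp=373 values parsed for nvme3 above are in kelvins — NVMe reports the warning and critical composite-temperature thresholds as integer K — so a quick conversion for readability (values copied from the trace):

    # Decode the id-ctrl temperature thresholds traced above (reported in K).
    wctemp=343; cctemp=373
    echo "warning threshold: $((wctemp - 273)) C, critical: $((cctemp - 273)) C"
    # -> warning threshold: 70 C, critical: 100 C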
00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.098 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:39.098 09:49:32 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:39.098 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # 
nvme3[icdoff]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:39.099 09:49:32 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.099 09:49:32 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:39.099 09:49:32 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:39.099 09:49:32 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@18 -- # shift 00:11:39.099 09:49:32 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 
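Note: the sqes=0x66 and cqes=0x44 fields parsed above pack the required (minimum) and maximum queue-entry sizes as base-2 exponents in the low and high nibbles. A hedged decode of those two traced values:

    # Decode SQES/CQES: low nibble = required entry size, high nibble = maximum,
    # both expressed as powers of two.
    sqes=0x66; cqes=0x44
    echo "SQ entry: min $((1 << (sqes & 0xf))) B, max $((1 << (sqes >> 4))) B"
    echo "CQ entry: min $((1 << (cqes & 0xf))) B, max $((1 << (cqes >> 4))) B"
    # -> SQ entry: min 64 B, max 64 B / CQ entry: min 16 B, max 16 B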
00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:39.099 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.099 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.099 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:39.100 09:49:32 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[nvmsetid]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:39.100 09:49:32 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.100 09:49:32 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.100 09:49:32 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:39.100 09:49:32 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:39.100 09:49:32 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:39.100 09:49:32 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:39.100 09:49:32 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:39.100 09:49:32 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:39.100 09:49:32 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:39.100 09:49:32 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:39.100 09:49:32 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:39.100 09:49:32 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:39.100 09:49:32 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:39.100 09:49:32 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:39.100 09:49:32 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:39.100 09:49:32 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:39.100 09:49:32 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.100 09:49:32 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:39.100 09:49:32 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:39.100 09:49:32 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:39.100 09:49:32 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:39.100 09:49:32 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:39.100 09:49:32 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:39.100 09:49:32 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:39.101 09:49:32 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:39.101 09:49:32 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:39.101 09:49:32 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:39.101 09:49:32 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:39.101 09:49:32 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:39.101 09:49:32 -- nvme/functions.sh@197 -- # echo nvme1 00:11:39.101 09:49:32 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.101 09:49:32 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:39.101 09:49:32 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:39.101 09:49:32 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:39.101 09:49:32 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:39.101 09:49:32 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:39.101 09:49:32 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:39.101 09:49:32 -- 
nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:39.101 09:49:32 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:39.101 09:49:32 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:39.101 09:49:32 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:39.101 09:49:32 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:39.101 09:49:32 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:39.101 09:49:32 -- nvme/functions.sh@197 -- # echo nvme0 00:11:39.101 09:49:32 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.101 09:49:32 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:39.101 09:49:32 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:39.101 09:49:32 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:39.101 09:49:32 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:39.101 09:49:32 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:39.101 09:49:32 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:39.101 09:49:32 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:39.101 09:49:32 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:39.101 09:49:32 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:39.101 09:49:32 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:39.101 09:49:32 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:39.101 09:49:32 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:39.101 09:49:32 -- nvme/functions.sh@197 -- # echo nvme3 00:11:39.101 09:49:32 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.101 09:49:32 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:39.101 09:49:32 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:39.101 09:49:32 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:39.101 09:49:32 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:39.101 09:49:32 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:39.101 09:49:32 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:39.101 09:49:32 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:39.101 09:49:32 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:39.101 09:49:32 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:39.101 09:49:32 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:39.101 09:49:32 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:39.101 09:49:32 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:39.101 09:49:32 -- nvme/functions.sh@197 -- # echo nvme2 00:11:39.101 09:49:32 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:39.101 09:49:32 -- nvme/functions.sh@206 -- # echo nvme1 00:11:39.101 09:49:32 -- nvme/functions.sh@207 -- # return 0 00:11:39.101 09:49:32 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:39.101 09:49:32 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:11:39.101 09:49:32 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:40.037 lsblk: /dev/nvme0c0n1: not a block device 00:11:40.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:40.295 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.295 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.295 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.295 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.295 09:49:34 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:40.295 09:49:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:40.295 09:49:34 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:11:40.295 09:49:34 -- common/autotest_common.sh@10 -- # set +x 00:11:40.295 ************************************ 00:11:40.295 START TEST nvme_simple_copy 00:11:40.295 ************************************ 00:11:40.295 09:49:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:40.864 Initializing NVMe Controllers 00:11:40.864 Attaching to 0000:00:08.0 00:11:40.864 Controller supports SCC. Attached to 0000:00:08.0 00:11:40.864 Namespace ID: 1 size: 4GB 00:11:40.864 Initialization complete. 00:11:40.864 00:11:40.864 Controller QEMU NVMe Ctrl (12342 ) 00:11:40.864 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:40.864 Namespace Block Size:4096 00:11:40.864 Writing LBAs 0 to 63 with Random Data 00:11:40.864 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:40.864 LBAs matching Written Data: 64 00:11:40.864 ************************************ 00:11:40.864 END TEST nvme_simple_copy 00:11:40.864 ************************************ 00:11:40.864 00:11:40.864 real 0m0.309s 00:11:40.864 user 0m0.122s 00:11:40.864 sys 0m0.084s 00:11:40.864 09:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.864 09:49:34 -- common/autotest_common.sh@10 -- # set +x 00:11:40.864 ************************************ 00:11:40.864 END TEST nvme_scc 00:11:40.864 ************************************ 00:11:40.864 00:11:40.864 real 0m8.240s 00:11:40.864 user 0m1.395s 00:11:40.864 sys 0m1.771s 00:11:40.864 09:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.864 09:49:34 -- common/autotest_common.sh@10 -- # set +x 00:11:40.864 09:49:34 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:11:40.864 09:49:34 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:11:40.864 09:49:34 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:11:40.864 09:49:34 -- spdk/autotest.sh@238 -- # [[ 1 -eq 1 ]] 00:11:40.864 09:49:34 -- spdk/autotest.sh@239 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:40.864 09:49:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:40.864 09:49:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:40.864 09:49:34 -- common/autotest_common.sh@10 -- # set +x 00:11:40.864 ************************************ 00:11:40.864 START TEST nvme_fdp 00:11:40.864 ************************************ 00:11:40.864 09:49:34 -- common/autotest_common.sh@1104 -- # test/nvme/nvme_fdp.sh 00:11:40.864 * Looking for test storage... 
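The controller selection that fed the copy test above reduces to one bitmask check: read each controller's ONCS field from id-ctrl and test bit 8, the Simple Copy Command flag (the trace shows oncs=0x15d on every controller here, so the first one reported wins). A minimal standalone sketch of that gate, assuming a stock nvme-cli on PATH rather than the repo's /usr/local/src/nvme-cli build:

#!/usr/bin/env bash
# Pick the first controller whose ONCS advertises Simple Copy (bit 8),
# mirroring ctrl_has_scc()/get_oncs() from nvme/functions.sh above.
for ctrl in /dev/nvme[0-9]; do
  [[ -e $ctrl ]] || continue
  oncs=$(nvme id-ctrl "$ctrl" | awk '/^oncs/ {print $3}')
  # e.g. 0x15d & (1 << 8) is non-zero, so SCC is supported
  if (( oncs & 1 << 8 )); then
    echo "$ctrl supports simple copy (oncs=$oncs)"
    break
  fi
done
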
00:11:40.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:40.864 09:49:34 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:40.864 09:49:34 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:40.864 09:49:34 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:40.864 09:49:34 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:40.864 09:49:34 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:40.864 09:49:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.864 09:49:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.864 09:49:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.864 09:49:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.864 09:49:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.864 09:49:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.864 09:49:34 -- paths/export.sh@5 -- # export PATH 00:11:40.864 09:49:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.864 09:49:34 -- nvme/functions.sh@10 -- # ctrls=() 00:11:40.864 09:49:34 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:40.864 09:49:34 -- nvme/functions.sh@11 -- # nvmes=() 00:11:40.864 09:49:34 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:40.864 09:49:34 -- nvme/functions.sh@12 -- # bdfs=() 00:11:40.864 09:49:34 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:40.864 09:49:34 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:40.864 09:49:34 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:40.864 09:49:34 -- nvme/functions.sh@14 -- # nvme_name= 00:11:40.864 09:49:34 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:40.864 09:49:34 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:41.431 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:41.431 Waiting for block devices as requested 00:11:41.431 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:41.431 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:41.690 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:41.690 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:46.964 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:46.964 09:49:40 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:46.964 09:49:40 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:46.964 09:49:40 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:46.964 09:49:40 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:46.964 09:49:40 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:46.964 09:49:40 -- scripts/common.sh@15 -- # local i 00:11:46.964 09:49:40 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:46.964 09:49:40 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:46.964 09:49:40 -- scripts/common.sh@24 -- # return 0 00:11:46.964 09:49:40 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:46.964 09:49:40 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:46.964 09:49:40 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@18 -- # shift 00:11:46.964 09:49:40 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- 
# nvme0[fr]='8.0.0 ' 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.964 
09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.964 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.964 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.964 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:46.965 09:49:40 -- nvme/functions.sh@21 
-- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # 
nvme0[hmpre]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 
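Each of these register lines is one turn of the same loop: functions.sh@16 pipes nvme id-ctrl into read with IFS=:, @22 skips empty values, and @23 evals the pair into a per-controller associative array. Condensed into a self-contained sketch (the array name and trimming are illustrative; the eval indirection of the original is dropped):

# Walk "reg : val" lines from id-ctrl into a bash associative array,
# as nvme_get() does above. The remainder after the first ':' stays in
# val, so values that contain colons (ps0, rwt) survive intact.
declare -A id
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}              # "oncs      " -> "oncs"
  val=${val#"${val%%[![:space:]]*}"}    # ltrim; keep inner/trailing spaces
  [[ -n $reg && -n $val ]] && id[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)
# ctratt=0x88010 above sets bit 19, the Flexible Data Placement flag
# this nvme_fdp run is about to rely on.
echo "ctratt=${id[ctratt]}"
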
00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.965 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.965 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:46.965 09:49:40 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:46.966 
09:49:40 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:46.966 09:49:40 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.966 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.966 09:49:40 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:46.966 09:49:40 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:46.966 09:49:40 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:46.966 09:49:40 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:46.967 09:49:40 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:46.967 09:49:40 -- nvme/functions.sh@47 -- # for ctrl in 
/sys/class/nvme/nvme* 00:11:46.967 09:49:40 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:46.967 09:49:40 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:46.967 09:49:40 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:46.967 09:49:40 -- scripts/common.sh@15 -- # local i 00:11:46.967 09:49:40 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:46.967 09:49:40 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:46.967 09:49:40 -- scripts/common.sh@24 -- # return 0 00:11:46.967 09:49:40 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:46.967 09:49:40 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:46.967 09:49:40 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:46.967 09:49:40 -- nvme/functions.sh@18 -- # shift 00:11:46.967 09:49:40 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.967 09:49:40 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:46.967 09:49:40 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.967 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.967 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.967 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.967 09:49:40 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.967 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.967 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:46.967 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.967 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:46.967 09:49:40 -- 
nvme/functions.sh@21 -- # IFS=:
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:11:46.967 09:49:40 -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:11:46.968 09:49:40 -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:11:46.969 09:49:40 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
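The block above is the xtrace of nvme/functions.sh@16-23 filling the nvme1 associative array from `nvme id-ctrl` output (the repeated IFS=: / read -r reg val / eval triples have been collapsed to the resulting assignments). Boiled down, the parser looks like the following minimal sketch. Names mirror the trace; the NVME_BIN wiring and the whitespace trimming are assumptions, not the exact SPDK helper.

    #!/usr/bin/env bash
    # Minimal sketch of the nvme_get loop traced above (functions.sh@16-23).
    # nvme-cli prints "reg : value" pairs, one per line.
    NVME_BIN=/usr/local/src/nvme-cli/nvme    # path taken from the trace
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare global array nvme1
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}            # "cmic      " -> "cmic"
            val=${val#"${val%%[![:space:]]*}"}  # strip leading blanks
            [[ -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
        done < <("$NVME_BIN" "$@")
    }
    nvme_get nvme1 id-ctrl /dev/nvme1        # fills nvme1[cmic], nvme1[mdts], ...
    echo "mdts=${nvme1[mdts]} subnqn=${nvme1[subnqn]}"

The same helper is reused verbatim for the per-namespace `id-ns` dumps that follow.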
00:11:46.970 09:49:40 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:11:46.970 09:49:40 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:46.970 09:49:40 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:11:46.970 09:49:40 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:11:46.970 09:49:40 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:11:46.970 09:49:40 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:11:46.970 09:49:40 -- nvme/functions.sh@18 -- # shift
00:11:46.970 09:49:40 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:11:46.970 09:49:40 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:11:46.970 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:46.971 09:49:40 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
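Worth noting in the nvme1n1 dump above: flbas=0x4 selects LBA format 4, and lbaf4 is indeed the entry flagged "(in use)" with lbads:12, i.e. 2^12 = 4096-byte blocks and no metadata (ms:0). A hypothetical follow-up snippet, using the nvme1n1 array filled in above, makes the decode explicit (FLBAS bits 3:0 index the LBA format list; lbads is log2 of the block size):

    # Hypothetical check against the captured array, not part of the test.
    fmt=$((nvme1n1[flbas] & 0xf))                               # 0x4 -> 4
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme1n1[lbaf$fmt]}")
    echo "nvme1n1: LBA format $fmt, $((1 << lbads))-byte blocks"  # 4096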
00:11:46.971 09:49:40 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:46.971 09:49:40 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]]
00:11:46.971 09:49:40 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2
00:11:46.971 09:49:40 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2
00:11:46.971 09:49:40 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val
00:11:46.971 09:49:40 -- nvme/functions.sh@18 -- # shift
00:11:46.971 09:49:40 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()'
00:11:46.971 09:49:40 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000
00:11:46.971 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000
00:11:46.972 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000
00:11:46.973 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:46.973 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:46.973 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:46.973 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:46.973 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:46.973 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:47.236 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:47.236 09:49:40 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:47.236 09:49:40 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2
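Between the per-namespace dumps the trace also shows the walk at nvme/functions.sh@53-58: namespaces are discovered by globbing the controller's sysfs directory and keyed by namespace id in _ctrl_ns. A standalone sketch follows; the _ctrl_ns and ns_dev names come from the trace, while the rest is an assumed reconstruction rather than the exact SPDK code.

    # Assumed reconstruction of the namespace walk (functions.sh@53-58).
    declare -A _ctrl_ns=()
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/${ctrl##*/}n"*; do      # nvme1n1 nvme1n2 nvme1n3 ...
        [[ -e $ns ]] || continue             # skip if the glob matched nothing
        ns_dev=${ns##*/}                     # basename, e.g. nvme1n2
        _ctrl_ns[${ns_dev##*n}]=$ns_dev      # key by NSID: _ctrl_ns[2]=nvme1n2
    done
    echo "namespaces: ${!_ctrl_ns[*]} -> ${_ctrl_ns[*]}"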
00:11:47.236 09:49:40 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:47.236 09:49:40 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]]
00:11:47.236 09:49:40 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3
00:11:47.236 09:49:40 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3
00:11:47.236 09:49:40 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val
00:11:47.236 09:49:40 -- nvme/functions.sh@18 -- # shift
00:11:47.236 09:49:40 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()'
00:11:47.236 09:49:40 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3
00:11:47.236 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000
00:11:47.236 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000
00:11:47.236 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000
00:11:47.236 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14
00:11:47.236 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7
00:11:47.236 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4
00:11:47.236 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3
00:11:47.236 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0
00:11:47.237 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0
00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000
00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000
00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:47.238 09:49:40 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:47.238 09:49:40 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:47.238 09:49:40 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:47.238 09:49:40 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:47.238 09:49:40 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:47.238 09:49:40 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:47.238 09:49:40 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:47.238 09:49:40 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:47.238 09:49:40 -- scripts/common.sh@15 -- # local i 00:11:47.238 09:49:40 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:47.238 09:49:40 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:47.238 09:49:40 -- scripts/common.sh@24 -- # return 0 00:11:47.238 09:49:40 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:47.238 09:49:40 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:47.238 09:49:40 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@18 -- # shift 00:11:47.238 09:49:40 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 
'nvme2[rtd3r]="0"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.238 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.238 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:47.238 09:49:40 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 
00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:47.239 
09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:47.239 
09:49:40 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:47.239 09:49:40 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.239 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.239 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 
09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.240 09:49:40 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:47.240 09:49:40 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.240 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 
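The power-state lines just below close out the nvme2 controller fields; the trace then descends into the namespaces and registers the controller: functions.sh@53 binds a nameref _ctrl_ns to the nvme2_ns array, @54-58 run "nvme_get nvme2n1 id-ns /dev/nvme2n1" for each namespace node under the controller, and @60-63 file the controller into the ctrls/nvmes/bdfs/ordered_ctrls maps before the @47 loop moves on to nvme3 at 0000:00:07.0. The enumeration as the trace implies it — the enclosing function name, the BDF lookup behind @49, and the local declarations are assumptions:

    scan_nvme_ctrls() {
        local ctrl ns pci ctrl_dev ns_dev
        for ctrl in /sys/class/nvme/nvme*; do               # functions.sh@47
            [[ -e $ctrl ]] || continue                      # functions.sh@48
            pci=$(basename "$(readlink -f "$ctrl/device")") # assumed source of the BDF set at @49
            pci_can_use "$pci" || continue                  # functions.sh@50 / scripts/common.sh@15-24
            ctrl_dev=${ctrl##*/}                            # functions.sh@51: e.g. nvme2
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # functions.sh@52
            local -n _ctrl_ns=${ctrl_dev}_ns                # functions.sh@53
            for ns in "$ctrl/${ctrl##*/}n"*; do             # functions.sh@54: /sys/.../nvme2/nvme2n1 ...
                [[ -e $ns ]] || continue                    # functions.sh@55
                ns_dev=${ns##*/}                            # functions.sh@56
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"     # functions.sh@57
                _ctrl_ns[${ns##*n}]=$ns_dev                 # functions.sh@58: keyed by namespace index
            done
            ctrls["$ctrl_dev"]=$ctrl_dev                    # functions.sh@60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns               # functions.sh@61: name of the ns map
            bdfs["$ctrl_dev"]=$pci                          # functions.sh@62
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev      # functions.sh@63: indexed by ctrl number
        done
    }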
00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:47.241 09:49:40 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:47.241 09:49:40 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:47.241 09:49:40 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:47.241 09:49:40 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@18 -- # shift 00:11:47.241 09:49:40 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 
-- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.241 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.241 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.241 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:47.242 09:49:40 -- 
nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 
lbads:9 rp:0 ' 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.242 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:47.242 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:47.242 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:47.243 09:49:40 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:47.243 09:49:40 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:47.243 09:49:40 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:47.243 09:49:40 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:47.243 09:49:40 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:47.243 09:49:40 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:47.243 09:49:40 -- 
nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:47.243 09:49:40 -- scripts/common.sh@15 -- # local i 00:11:47.243 09:49:40 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:47.243 09:49:40 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:47.243 09:49:40 -- scripts/common.sh@24 -- # return 0 00:11:47.243 09:49:40 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:47.243 09:49:40 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:47.243 09:49:40 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@18 -- # shift 00:11:47.243 09:49:40 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.243 09:49:40 -- 
nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:47.243 
09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.243 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.243 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.243 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:47.244 09:49:40 -- 
nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- 
nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.244 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.244 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:47.244 09:49:40 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 
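The wall of xtrace above is nvme_get() in nvme/functions.sh caching every "reg : val" pair that nvme-cli prints into a bash associative array keyed by register name. A minimal sketch of that parsing loop, simplified from the eval-based original (array and device names are illustrative):

    declare -A ctrl_regs
    # Split each "reg : val" line that nvme-cli emits, mirroring the
    # IFS=: / read -r reg val records in the xtrace above.
    while IFS=: read -r reg val; do
        [[ -n $reg ]] || continue            # skip blank lines
        reg=${reg//[[:space:]]/}             # field names are space-padded
        ctrl_regs[$reg]=${val# }             # keep the value text as-is
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "mdts=${ctrl_regs[mdts]} ctratt=${ctrl_regs[ctratt]}"

The real helper additionally evals into a named array (nvme3, nvme3n1, ...) so later functions can look registers up by controller name.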
00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # 
nvme3[nwpc]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:47.245 09:49:40 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:47.245 09:49:40 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:47.245 09:49:40 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:47.245 09:49:40 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:47.245 09:49:40 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@18 -- # shift 00:11:47.245 09:49:40 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.245 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.245 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 
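At this point the loop switches from the controller to its namespaces: functions.sh@54 globs the controller's sysfs node and repeats the same nvme_get caching with id-ns for each match. The same sysfs walk in isolation (sketch only; the real code also records the namespace into _ctrl_ns):

    # Every namespace of controller nvmeX appears under sysfs as
    # /sys/class/nvme/nvmeX/nvmeXnY, so a nested glob finds them all.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        for ns in "$ctrl/${ctrl##*/}n"*; do
            [[ -e $ns ]] || continue
            echo "${ctrl##*/}: namespace /dev/${ns##*/}"
        done
    done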
00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # 
nvme3n1[fpi]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- 
nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.246 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.246 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:47.246 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:16 
lbads:12 rp:0 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:47.247 09:49:40 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # IFS=: 00:11:47.247 09:49:40 -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.247 09:49:40 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:47.247 09:49:40 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:47.247 09:49:40 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:47.247 09:49:40 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:47.247 09:49:40 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:47.247 09:49:40 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:47.247 09:49:40 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:47.247 09:49:40 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:47.247 09:49:40 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:47.247 09:49:40 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:47.247 09:49:40 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:47.247 09:49:40 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:47.247 09:49:40 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:47.247 09:49:40 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:47.247 09:49:40 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:47.247 09:49:40 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:47.247 09:49:40 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:47.247 09:49:40 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:47.247 09:49:40 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:47.247 09:49:40 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:47.247 09:49:40 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:47.247 09:49:40 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:47.247 09:49:40 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:47.247 09:49:40 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:47.247 09:49:40 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:47.247 09:49:40 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:47.247 09:49:40 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:47.247 09:49:40 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:47.247 09:49:40 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:47.247 09:49:40 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:47.247 09:49:40 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:47.247 09:49:40 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:47.247 09:49:40 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@76 -- # echo 0x88010 
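The get_ctrls_with_feature loop running here keeps only controllers whose CTRATT advertises Flexible Data Placement, which is bit 19. That is why CTRATT 0x88010 (nvme0) is selected just below while the 0x8000 controllers (nvme1/2/3) are skipped. A standalone sketch of the test:

    # CTRATT bit 19 = FDP support; ctrl_has_fdp in functions.sh performs
    # the same arithmetic check seen in the xtrace: (( ctratt & 1 << 19 )).
    has_fdp() { local ctratt=$1; (( ctratt & 1 << 19 )); }
    has_fdp 0x88010 && echo "FDP-capable"   # nvme0: bit 19 (0x80000) set
    has_fdp 0x8000  || echo "no FDP"        # nvme1/2/3: bit 19 clear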
00:11:47.247 09:49:40 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:47.247 09:49:40 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:47.247 09:49:40 -- nvme/functions.sh@197 -- # echo nvme0 00:11:47.247 09:49:40 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:47.247 09:49:40 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:47.247 09:49:40 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:47.247 09:49:40 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:47.247 09:49:40 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:47.247 09:49:40 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:47.247 09:49:40 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:47.247 09:49:40 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:47.247 09:49:40 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:47.247 09:49:40 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:47.247 09:49:40 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:47.247 09:49:40 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:47.247 09:49:40 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:47.247 09:49:40 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:47.247 09:49:40 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:47.247 09:49:40 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:47.247 09:49:40 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:47.247 09:49:40 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:47.247 09:49:40 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:47.247 09:49:40 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:47.247 09:49:40 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:47.247 09:49:40 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:47.247 09:49:40 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:47.247 09:49:40 -- nvme/functions.sh@204 -- # trap - ERR 00:11:47.247 09:49:40 -- nvme/functions.sh@204 -- # print_backtrace 00:11:47.247 09:49:40 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:11:47.247 09:49:40 -- common/autotest_common.sh@1132 -- # return 0 00:11:47.247 09:49:40 -- nvme/functions.sh@204 -- # trap - ERR 00:11:47.247 09:49:40 -- nvme/functions.sh@204 -- # print_backtrace 00:11:47.247 09:49:40 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:11:47.247 09:49:40 -- common/autotest_common.sh@1132 -- # return 0 00:11:47.247 09:49:40 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:47.247 09:49:40 -- nvme/functions.sh@206 -- # echo nvme0 00:11:47.247 09:49:40 -- nvme/functions.sh@207 -- # return 0 00:11:47.248 09:49:40 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:47.248 09:49:40 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:47.248 09:49:40 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:48.184 lsblk: /dev/nvme0c0n1: not a block device 00:11:48.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:48.443 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:48.443 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:48.443 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:48.701 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:48.701 09:49:42 -- nvme/nvme_fdp.sh@17 -- # run_test 
nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:48.701 09:49:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:48.701 09:49:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:48.701 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:11:48.701 ************************************ 00:11:48.701 START TEST nvme_flexible_data_placement 00:11:48.701 ************************************ 00:11:48.701 09:49:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:48.959 Initializing NVMe Controllers 00:11:48.959 Attaching to 0000:00:09.0 00:11:48.959 Controller supports FDP Attached to 0000:00:09.0 00:11:48.959 Namespace ID: 1 Endurance Group ID: 1 00:11:48.959 Initialization complete. 00:11:48.959 00:11:48.959 ================================== 00:11:48.959 == FDP tests for Namespace: #01 == 00:11:48.959 ================================== 00:11:48.959 00:11:48.959 Get Feature: FDP: 00:11:48.959 ================= 00:11:48.959 Enabled: Yes 00:11:48.959 FDP configuration Index: 0 00:11:48.959 00:11:48.959 FDP configurations log page 00:11:48.959 =========================== 00:11:48.959 Number of FDP configurations: 1 00:11:48.959 Version: 0 00:11:48.959 Size: 112 00:11:48.959 FDP Configuration Descriptor: 0 00:11:48.959 Descriptor Size: 96 00:11:48.959 Reclaim Group Identifier format: 2 00:11:48.959 FDP Volatile Write Cache: Not Present 00:11:48.959 FDP Configuration: Valid 00:11:48.959 Vendor Specific Size: 0 00:11:48.959 Number of Reclaim Groups: 2 00:11:48.959 Number of Reclaim Unit Handles: 8 00:11:48.959 Max Placement Identifiers: 128 00:11:48.959 Number of Namespaces Supported: 256 00:11:48.959 Reclaim Unit Nominal Size: 6000000 bytes 00:11:48.959 Estimated Reclaim Unit Time Limit: Not Reported 00:11:48.959 RUH Desc #000: RUH Type: Initially Isolated 00:11:48.959 RUH Desc #001: RUH Type: Initially Isolated 00:11:48.959 RUH Desc #002: RUH Type: Initially Isolated 00:11:48.959 RUH Desc #003: RUH Type: Initially Isolated 00:11:48.959 RUH Desc #004: RUH Type: Initially Isolated 00:11:48.959 RUH Desc #005: RUH Type: Initially Isolated 00:11:48.959 RUH Desc #006: RUH Type: Initially Isolated 00:11:48.959 RUH Desc #007: RUH Type: Initially Isolated 00:11:48.959 00:11:48.959 FDP reclaim unit handle usage log page 00:11:48.959 ====================================== 00:11:48.959 Number of Reclaim Unit Handles: 8 00:11:48.959 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:48.959 RUH Usage Desc #001: RUH Attributes: Unused 00:11:48.959 RUH Usage Desc #002: RUH Attributes: Unused 00:11:48.959 RUH Usage Desc #003: RUH Attributes: Unused 00:11:48.959 RUH Usage Desc #004: RUH Attributes: Unused 00:11:48.959 RUH Usage Desc #005: RUH Attributes: Unused 00:11:48.959 RUH Usage Desc #006: RUH Attributes: Unused 00:11:48.959 RUH Usage Desc #007: RUH Attributes: Unused 00:11:48.959 00:11:48.959 FDP statistics log page 00:11:48.959 ======================= 00:11:48.959 Host bytes with metadata written: 791519232 00:11:48.959 Media bytes with metadata written: 791699456 00:11:48.959 Media bytes erased: 0 00:11:48.959 00:11:48.959 FDP Reclaim unit handle status 00:11:48.959 ============================== 00:11:48.959 Number of RUHS descriptors: 2 00:11:48.959 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000d26 00:11:48.959 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 
0x0000000000006000 00:11:48.959 00:11:48.959 FDP write on placement id: 0 success 00:11:48.959 00:11:48.959 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:48.959 00:11:48.959 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:48.959 00:11:48.959 Get Feature: FDP Events for Placement handle: #0 00:11:48.959 ======================== 00:11:48.959 Number of FDP Events: 6 00:11:48.959 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:48.959 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:48.959 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:48.959 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:48.960 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:48.960 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:48.960 00:11:48.960 FDP events log page 00:11:48.960 =================== 00:11:48.960 Number of FDP events: 1 00:11:48.960 FDP Event #0: 00:11:48.960 Event Type: RU Not Written to Capacity 00:11:48.960 Placement Identifier: Valid 00:11:48.960 NSID: Valid 00:11:48.960 Location: Valid 00:11:48.960 Placement Identifier: 0 00:11:48.960 Event Timestamp: c 00:11:48.960 Namespace Identifier: 1 00:11:48.960 Reclaim Group Identifier: 0 00:11:48.960 Reclaim Unit Handle Identifier: 0 00:11:48.960 00:11:48.960 FDP test passed 00:11:48.960 00:11:48.960 real 0m0.281s 00:11:48.960 user 0m0.096s 00:11:48.960 sys 0m0.084s 00:11:48.960 09:49:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.960 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:11:48.960 ************************************ 00:11:48.960 END TEST nvme_flexible_data_placement 00:11:48.960 ************************************ 00:11:48.960 00:11:48.960 real 0m8.193s 00:11:48.960 user 0m1.384s 00:11:48.960 sys 0m1.830s 00:11:48.960 09:49:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.960 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:11:48.960 ************************************ 00:11:48.960 END TEST nvme_fdp 00:11:48.960 ************************************ 00:11:48.960 09:49:42 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:11:48.960 09:49:42 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:48.960 09:49:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:48.960 09:49:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:48.960 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:11:48.960 ************************************ 00:11:48.960 START TEST nvme_rpc 00:11:48.960 ************************************ 00:11:48.960 09:49:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:49.247 * Looking for test storage... 
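The START/END banners and real/user/sys timings bracketing each suite in this log come from run_test in common/autotest_common.sh; roughly, under assumed banner formatting (the real helper also validates its arguments and toggles xtrace, which is what the '[' 4 -le 1 ']' and xtrace_disable records reflect):

    # Simplified sketch of the run_test wrapper used throughout autotest.
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                           # produces the real/user/sys lines
        echo "************ END TEST $name ************"
    }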
00:11:49.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:49.247 09:49:42 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.247 09:49:42 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:49.247 09:49:42 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:49.247 09:49:42 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:49.247 09:49:42 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:49.247 09:49:42 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:49.247 09:49:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:49.247 09:49:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:49.247 09:49:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:49.247 09:49:42 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:49.247 09:49:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:49.247 09:49:42 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:49.247 09:49:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:49.247 09:49:42 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:11:49.247 09:49:42 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:49.247 09:49:42 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67660 00:11:49.247 09:49:42 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:49.247 09:49:42 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67660 00:11:49.247 09:49:42 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:49.247 09:49:42 -- common/autotest_common.sh@819 -- # '[' -z 67660 ']' 00:11:49.247 09:49:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.247 09:49:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:49.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.247 09:49:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.247 09:49:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:49.247 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:11:49.247 [2024-06-10 09:49:42.950704] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:49.247 [2024-06-10 09:49:42.950869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67660 ] 00:11:49.506 [2024-06-10 09:49:43.121335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:49.764 [2024-06-10 09:49:43.346080] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:49.764 [2024-06-10 09:49:43.346542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.764 [2024-06-10 09:49:43.346554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.140 09:49:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:51.140 09:49:44 -- common/autotest_common.sh@852 -- # return 0 00:11:51.140 09:49:44 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:51.399 Nvme0n1 00:11:51.399 09:49:44 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:51.399 09:49:44 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:51.657 request: 00:11:51.657 { 00:11:51.657 "filename": "non_existing_file", 00:11:51.657 "bdev_name": "Nvme0n1", 00:11:51.657 "method": "bdev_nvme_apply_firmware", 00:11:51.657 "req_id": 1 00:11:51.657 } 00:11:51.657 Got JSON-RPC error response 00:11:51.657 response: 00:11:51.657 { 00:11:51.657 "code": -32603, 00:11:51.657 "message": "open file failed." 00:11:51.657 } 00:11:51.657 09:49:45 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:51.657 09:49:45 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:51.657 09:49:45 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:51.916 09:49:45 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:51.916 09:49:45 -- nvme/nvme_rpc.sh@40 -- # killprocess 67660 00:11:51.916 09:49:45 -- common/autotest_common.sh@926 -- # '[' -z 67660 ']' 00:11:51.916 09:49:45 -- common/autotest_common.sh@930 -- # kill -0 67660 00:11:51.916 09:49:45 -- common/autotest_common.sh@931 -- # uname 00:11:51.916 09:49:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:51.916 09:49:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67660 00:11:51.916 09:49:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:51.916 killing process with pid 67660 00:11:51.916 09:49:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:51.916 09:49:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67660' 00:11:51.916 09:49:45 -- common/autotest_common.sh@945 -- # kill 67660 00:11:51.916 09:49:45 -- common/autotest_common.sh@950 -- # wait 67660 00:11:53.822 00:11:53.822 real 0m4.713s 00:11:53.822 user 0m9.161s 00:11:53.822 sys 0m0.644s 00:11:53.822 09:49:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.822 09:49:47 -- common/autotest_common.sh@10 -- # set +x 00:11:53.822 ************************************ 00:11:53.822 END TEST nvme_rpc 00:11:53.822 ************************************ 00:11:53.822 09:49:47 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:53.822 09:49:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:53.822 09:49:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 
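Note on the nvme_rpc pass above: the firmware step is a negative test. bdev_nvme_apply_firmware is pointed at a file that does not exist, the target answers with JSON-RPC error -32603 ("open file failed."), and the script only asserts that rv is non-zero before detaching the controller. A minimal sketch of that error-path pattern, assuming a running spdk_tgt and rpc.py on PATH:

    rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    rv=0
    # expected to fail: the firmware image intentionally does not exist
    rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 || rv=$?
    [ "$rv" -ne 0 ] && echo 'got the expected open-file failure'
    rpc.py bdev_nvme_detach_controller Nvme0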
00:11:53.822 09:49:47 -- common/autotest_common.sh@10 -- # set +x 00:11:53.822 ************************************ 00:11:53.822 START TEST nvme_rpc_timeouts 00:11:53.822 ************************************ 00:11:53.822 09:49:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:53.822 * Looking for test storage... 00:11:53.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:53.822 09:49:47 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.822 09:49:47 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67744 00:11:53.822 09:49:47 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67744 00:11:53.822 09:49:47 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67767 00:11:53.822 09:49:47 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:53.822 09:49:47 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:53.822 09:49:47 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67767 00:11:53.822 09:49:47 -- common/autotest_common.sh@819 -- # '[' -z 67767 ']' 00:11:53.822 09:49:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.822 09:49:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:53.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.822 09:49:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.822 09:49:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:53.822 09:49:47 -- common/autotest_common.sh@10 -- # set +x 00:11:54.081 [2024-06-10 09:49:47.644727] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:54.081 [2024-06-10 09:49:47.644899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67767 ] 00:11:54.081 [2024-06-10 09:49:47.816514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:54.341 [2024-06-10 09:49:48.018685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:54.341 [2024-06-10 09:49:48.019118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.341 [2024-06-10 09:49:48.019163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.718 09:49:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:55.718 Checking default timeout settings: 00:11:55.718 09:49:49 -- common/autotest_common.sh@852 -- # return 0 00:11:55.718 09:49:49 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:55.718 09:49:49 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:55.977 Making settings changes with rpc: 00:11:55.977 09:49:49 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:55.977 09:49:49 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:56.235 Check default vs. 
modified settings: 00:11:56.235 09:49:49 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:56.235 09:49:49 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:56.493 09:49:50 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:56.493 09:49:50 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:56.493 09:49:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67744 00:11:56.493 09:49:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:56.493 09:49:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:56.493 09:49:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:56.493 09:49:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67744 00:11:56.493 09:49:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:56.493 09:49:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:56.493 09:49:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:56.494 Setting action_on_timeout is changed as expected. 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67744 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67744 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:56.494 Setting timeout_us is changed as expected. 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67744 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67744 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:56.494 Setting timeout_admin_us is changed as expected. 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
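The check above never parses live target state; it diffs two save_config snapshots, one taken before and one after bdev_nvme_set_options, extracting each field with the same grep/awk/sed pipeline. A minimal sketch of that snapshot-diff pattern, assuming rpc.py on PATH and a listening target:

    rpc.py save_config > /tmp/settings_default
    rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    rpc.py save_config > /tmp/settings_modified
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # strip everything but alphanumerics so "0," and "0" compare equal
        before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
    done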
00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67744 /tmp/settings_modified_67744 00:11:56.494 09:49:50 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67767 00:11:56.494 09:49:50 -- common/autotest_common.sh@926 -- # '[' -z 67767 ']' 00:11:56.494 09:49:50 -- common/autotest_common.sh@930 -- # kill -0 67767 00:11:56.494 09:49:50 -- common/autotest_common.sh@931 -- # uname 00:11:56.494 09:49:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:56.494 09:49:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67767 00:11:56.752 09:49:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:56.752 killing process with pid 67767 00:11:56.752 09:49:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:56.752 09:49:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67767' 00:11:56.752 09:49:50 -- common/autotest_common.sh@945 -- # kill 67767 00:11:56.752 09:49:50 -- common/autotest_common.sh@950 -- # wait 67767 00:11:58.653 RPC TIMEOUT SETTING TEST PASSED. 00:11:58.653 09:49:52 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:58.653 00:11:58.653 real 0m4.852s 00:11:58.653 user 0m9.584s 00:11:58.653 sys 0m0.562s 00:11:58.653 09:49:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.653 ************************************ 00:11:58.653 09:49:52 -- common/autotest_common.sh@10 -- # set +x 00:11:58.653 END TEST nvme_rpc_timeouts 00:11:58.653 ************************************ 00:11:58.653 09:49:52 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:11:58.653 09:49:52 -- spdk/autotest.sh@255 -- # [[ 1 -eq 1 ]] 00:11:58.653 09:49:52 -- spdk/autotest.sh@256 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:58.653 09:49:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:58.653 09:49:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:58.653 09:49:52 -- common/autotest_common.sh@10 -- # set +x 00:11:58.654 ************************************ 00:11:58.654 START TEST nvme_xnvme 00:11:58.654 ************************************ 00:11:58.654 09:49:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:58.913 * Looking for test storage... 
00:11:58.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:58.913 09:49:52 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:58.913 09:49:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.913 09:49:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.913 09:49:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.913 09:49:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.913 09:49:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.913 09:49:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.913 09:49:52 -- paths/export.sh@5 -- # export PATH 00:11:58.913 09:49:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:58.913 09:49:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:58.913 09:49:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:58.913 09:49:52 -- common/autotest_common.sh@10 -- # set +x 00:11:58.913 ************************************ 00:11:58.913 START TEST xnvme_to_malloc_dd_copy 00:11:58.913 ************************************ 00:11:58.913 09:49:52 -- common/autotest_common.sh@1104 -- # malloc_to_xnvme_copy 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:58.913 09:49:52 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:58.913 09:49:52 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:58.913 09:49:52 -- dd/common.sh@191 -- # return 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@18 -- # local io 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:58.913 
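The copy passes that follow pair a 1 GiB malloc bdev against an xnvme bdev backed by /dev/nullb0 and hand the bdev config to spdk_dd over an anonymous fd, first with libaio and then with io_uring, in both copy directions. A condensed sketch of one libaio pass, assuming null_blk is loaded with gb=1 and spdk_dd is built:

    modprobe null_blk gb=1
    # 2097152 blocks of 512 bytes on each side = 1024 MiB copied
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(cat <<'EOF'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_malloc_create","params":{"name":"malloc0","block_size":512,"num_blocks":2097152}},
      {"method":"bdev_xnvme_create","params":{"io_mechanism":"libaio","filename":"/dev/nullb0","name":"null0"}},
      {"method":"bdev_wait_for_examine"}
    ]}]}
    EOF
    )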
09:49:52 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:58.913 09:49:52 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:58.914 09:49:52 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:58.914 09:49:52 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:58.914 09:49:52 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:58.914 09:49:52 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:58.914 09:49:52 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:58.914 09:49:52 -- dd/common.sh@31 -- # xtrace_disable 00:11:58.914 09:49:52 -- common/autotest_common.sh@10 -- # set +x 00:11:58.914 { 00:11:58.914 "subsystems": [ 00:11:58.914 { 00:11:58.914 "subsystem": "bdev", 00:11:58.914 "config": [ 00:11:58.914 { 00:11:58.914 "params": { 00:11:58.914 "block_size": 512, 00:11:58.914 "num_blocks": 2097152, 00:11:58.914 "name": "malloc0" 00:11:58.914 }, 00:11:58.914 "method": "bdev_malloc_create" 00:11:58.914 }, 00:11:58.914 { 00:11:58.914 "params": { 00:11:58.914 "io_mechanism": "libaio", 00:11:58.914 "filename": "/dev/nullb0", 00:11:58.914 "name": "null0" 00:11:58.914 }, 00:11:58.914 "method": "bdev_xnvme_create" 00:11:58.914 }, 00:11:58.914 { 00:11:58.914 "method": "bdev_wait_for_examine" 00:11:58.914 } 00:11:58.914 ] 00:11:58.914 } 00:11:58.914 ] 00:11:58.914 } 00:11:58.914 [2024-06-10 09:49:52.557257] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:58.914 [2024-06-10 09:49:52.557412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67906 ] 00:11:59.172 [2024-06-10 09:49:52.728870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.430 [2024-06-10 09:49:52.951626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.577  Copying: 173/1024 [MB] (173 MBps) Copying: 351/1024 [MB] (178 MBps) Copying: 524/1024 [MB] (173 MBps) Copying: 695/1024 [MB] (170 MBps) Copying: 869/1024 [MB] (173 MBps) Copying: 1024/1024 [MB] (average 173 MBps) 00:12:10.577 00:12:10.577 09:50:03 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:10.577 09:50:03 -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:10.577 09:50:03 -- dd/common.sh@31 -- # xtrace_disable 00:12:10.577 09:50:03 -- common/autotest_common.sh@10 -- # set +x 00:12:10.577 { 00:12:10.577 "subsystems": [ 00:12:10.577 { 00:12:10.577 "subsystem": "bdev", 00:12:10.577 "config": [ 00:12:10.577 { 00:12:10.577 "params": { 00:12:10.577 "block_size": 512, 00:12:10.577 "num_blocks": 2097152, 00:12:10.577 "name": "malloc0" 00:12:10.577 }, 00:12:10.577 "method": "bdev_malloc_create" 00:12:10.577 }, 00:12:10.577 { 00:12:10.577 "params": { 00:12:10.577 "io_mechanism": "libaio", 00:12:10.577 "filename": "/dev/nullb0", 00:12:10.577 "name": "null0" 00:12:10.577 }, 00:12:10.577 "method": "bdev_xnvme_create" 00:12:10.577 }, 00:12:10.577 { 00:12:10.577 "method": "bdev_wait_for_examine" 00:12:10.577 } 00:12:10.577 ] 00:12:10.577 } 00:12:10.577 ] 00:12:10.577 } 00:12:10.577 [2024-06-10 09:50:03.629100] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:12:10.577 [2024-06-10 09:50:03.629281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68032 ] 00:12:10.577 [2024-06-10 09:50:03.795323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.577 [2024-06-10 09:50:03.970423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.885  Copying: 173/1024 [MB] (173 MBps) Copying: 350/1024 [MB] (177 MBps) Copying: 519/1024 [MB] (168 MBps) Copying: 698/1024 [MB] (179 MBps) Copying: 878/1024 [MB] (179 MBps) Copying: 1024/1024 [MB] (average 176 MBps) 00:12:20.885 00:12:20.885 09:50:14 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:20.885 09:50:14 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:20.885 09:50:14 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:20.885 09:50:14 -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:20.885 09:50:14 -- dd/common.sh@31 -- # xtrace_disable 00:12:20.885 09:50:14 -- common/autotest_common.sh@10 -- # set +x 00:12:20.885 { 00:12:20.885 "subsystems": [ 00:12:20.885 { 00:12:20.885 "subsystem": "bdev", 00:12:20.885 "config": [ 00:12:20.885 { 00:12:20.885 "params": { 00:12:20.885 "block_size": 512, 00:12:20.885 "num_blocks": 2097152, 00:12:20.885 "name": "malloc0" 00:12:20.885 }, 00:12:20.885 "method": "bdev_malloc_create" 00:12:20.885 }, 00:12:20.885 { 00:12:20.885 "params": { 00:12:20.885 "io_mechanism": "io_uring", 00:12:20.885 "filename": "/dev/nullb0", 00:12:20.885 "name": "null0" 00:12:20.885 }, 00:12:20.885 "method": "bdev_xnvme_create" 00:12:20.885 }, 00:12:20.885 { 00:12:20.885 "method": "bdev_wait_for_examine" 00:12:20.885 } 00:12:20.885 ] 00:12:20.885 } 00:12:20.885 ] 00:12:20.885 } 00:12:20.885 [2024-06-10 09:50:14.591186] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:12:20.885 [2024-06-10 09:50:14.591328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68153 ] 00:12:21.144 [2024-06-10 09:50:14.751990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.403 [2024-06-10 09:50:14.942414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.499  Copying: 181/1024 [MB] (181 MBps) Copying: 365/1024 [MB] (183 MBps) Copying: 544/1024 [MB] (179 MBps) Copying: 725/1024 [MB] (181 MBps) Copying: 902/1024 [MB] (177 MBps) Copying: 1024/1024 [MB] (average 180 MBps) 00:12:32.499 00:12:32.499 09:50:25 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:32.499 09:50:25 -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:32.499 09:50:25 -- dd/common.sh@31 -- # xtrace_disable 00:12:32.499 09:50:25 -- common/autotest_common.sh@10 -- # set +x 00:12:32.499 { 00:12:32.499 "subsystems": [ 00:12:32.499 { 00:12:32.499 "subsystem": "bdev", 00:12:32.499 "config": [ 00:12:32.499 { 00:12:32.499 "params": { 00:12:32.499 "block_size": 512, 00:12:32.499 "num_blocks": 2097152, 00:12:32.499 "name": "malloc0" 00:12:32.499 }, 00:12:32.499 "method": "bdev_malloc_create" 00:12:32.499 }, 00:12:32.499 { 00:12:32.499 "params": { 00:12:32.499 "io_mechanism": "io_uring", 00:12:32.499 "filename": "/dev/nullb0", 00:12:32.499 "name": "null0" 00:12:32.499 }, 00:12:32.499 "method": "bdev_xnvme_create" 00:12:32.499 }, 00:12:32.499 { 00:12:32.499 "method": "bdev_wait_for_examine" 00:12:32.499 } 00:12:32.499 ] 00:12:32.499 } 00:12:32.499 ] 00:12:32.499 } 00:12:32.499 [2024-06-10 09:50:25.603004] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:12:32.499 [2024-06-10 09:50:25.603188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68279 ] 00:12:32.499 [2024-06-10 09:50:25.775709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.499 [2024-06-10 09:50:26.009330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.703  Copying: 186/1024 [MB] (186 MBps) Copying: 370/1024 [MB] (184 MBps) Copying: 556/1024 [MB] (185 MBps) Copying: 733/1024 [MB] (177 MBps) Copying: 914/1024 [MB] (180 MBps) Copying: 1024/1024 [MB] (average 183 MBps) 00:12:42.703 00:12:42.703 09:50:36 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:42.703 09:50:36 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:42.703 00:12:42.703 real 0m44.009s 00:12:42.703 user 0m38.585s 00:12:42.703 sys 0m4.804s 00:12:42.703 09:50:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:42.703 09:50:36 -- common/autotest_common.sh@10 -- # set +x 00:12:42.703 ************************************ 00:12:42.703 END TEST xnvme_to_malloc_dd_copy 00:12:42.703 ************************************ 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:42.962 09:50:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:42.962 09:50:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:42.962 09:50:36 -- common/autotest_common.sh@10 -- # set +x 00:12:42.962 ************************************ 00:12:42.962 START TEST xnvme_bdevperf 00:12:42.962 ************************************ 00:12:42.962 09:50:36 -- common/autotest_common.sh@1104 -- # xnvme_bdevperf 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:42.962 09:50:36 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:42.962 09:50:36 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:42.962 09:50:36 -- dd/common.sh@191 -- # return 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@60 -- # local io 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:42.962 09:50:36 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:42.962 09:50:36 -- dd/common.sh@31 -- # xtrace_disable 00:12:42.962 09:50:36 -- common/autotest_common.sh@10 -- # set +x 00:12:42.962 { 00:12:42.962 "subsystems": [ 00:12:42.962 { 00:12:42.962 "subsystem": "bdev", 00:12:42.962 "config": [ 00:12:42.962 { 00:12:42.962 "params": { 00:12:42.962 "io_mechanism": "libaio", 00:12:42.962 "filename": "/dev/nullb0", 
00:12:42.962 "name": "null0" 00:12:42.962 }, 00:12:42.962 "method": "bdev_xnvme_create" 00:12:42.962 }, 00:12:42.962 { 00:12:42.962 "method": "bdev_wait_for_examine" 00:12:42.962 } 00:12:42.962 ] 00:12:42.962 } 00:12:42.962 ] 00:12:42.962 } 00:12:42.962 [2024-06-10 09:50:36.630202] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:42.962 [2024-06-10 09:50:36.630399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68428 ] 00:12:43.221 [2024-06-10 09:50:36.805136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.480 [2024-06-10 09:50:37.005809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.750 Running I/O for 5 seconds... 00:12:49.034 00:12:49.034 Latency(us) 00:12:49.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.034 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:49.034 null0 : 5.00 112186.74 438.23 0.00 0.00 567.08 172.22 845.27 00:12:49.034 =================================================================================================================== 00:12:49.034 Total : 112186.74 438.23 0.00 0.00 567.08 172.22 845.27 00:12:49.988 09:50:43 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:49.988 09:50:43 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:49.989 09:50:43 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:49.989 09:50:43 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:49.989 09:50:43 -- dd/common.sh@31 -- # xtrace_disable 00:12:49.989 09:50:43 -- common/autotest_common.sh@10 -- # set +x 00:12:49.989 { 00:12:49.989 "subsystems": [ 00:12:49.989 { 00:12:49.989 "subsystem": "bdev", 00:12:49.989 "config": [ 00:12:49.989 { 00:12:49.989 "params": { 00:12:49.989 "io_mechanism": "io_uring", 00:12:49.989 "filename": "/dev/nullb0", 00:12:49.989 "name": "null0" 00:12:49.989 }, 00:12:49.989 "method": "bdev_xnvme_create" 00:12:49.989 }, 00:12:49.989 { 00:12:49.989 "method": "bdev_wait_for_examine" 00:12:49.989 } 00:12:49.989 ] 00:12:49.989 } 00:12:49.989 ] 00:12:49.989 } 00:12:49.989 [2024-06-10 09:50:43.538447] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:49.989 [2024-06-10 09:50:43.538663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68508 ] 00:12:49.989 [2024-06-10 09:50:43.710808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.266 [2024-06-10 09:50:43.939808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.525 Running I/O for 5 seconds... 
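Both bdevperf passes share one invocation shape: a single xnvme bdev on /dev/nullb0, 64-deep 4 KiB random reads for five seconds, with only the io_mechanism swapped between runs (libaio above, io_uring below). A minimal sketch of the libaio pass, assuming the bdevperf example is built and /dev/nullb0 exists:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 64 -w randread -t 5 -T null0 -o 4096 --json <(cat <<'EOF'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_xnvme_create","params":{"io_mechanism":"libaio","filename":"/dev/nullb0","name":"null0"}},
      {"method":"bdev_wait_for_examine"}
    ]}]}
    EOF
    )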
00:12:55.798 00:12:55.798 Latency(us) 00:12:55.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.798 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:55.798 null0 : 5.00 155909.08 609.02 0.00 0.00 407.39 237.38 793.13 00:12:55.798 =================================================================================================================== 00:12:55.798 Total : 155909.08 609.02 0.00 0.00 407.39 237.38 793.13 00:12:56.741 09:50:50 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:56.741 09:50:50 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:56.741 00:12:56.741 real 0m13.891s 00:12:56.741 user 0m10.904s 00:12:56.741 sys 0m2.754s 00:12:56.741 09:50:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.741 09:50:50 -- common/autotest_common.sh@10 -- # set +x 00:12:56.741 ************************************ 00:12:56.741 END TEST xnvme_bdevperf 00:12:56.741 ************************************ 00:12:56.741 00:12:56.741 real 0m58.084s 00:12:56.741 user 0m49.553s 00:12:56.741 sys 0m7.671s 00:12:56.741 09:50:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.741 09:50:50 -- common/autotest_common.sh@10 -- # set +x 00:12:56.741 ************************************ 00:12:56.741 END TEST nvme_xnvme 00:12:56.741 ************************************ 00:12:56.741 09:50:50 -- spdk/autotest.sh@257 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:56.741 09:50:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:56.741 09:50:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:56.741 09:50:50 -- common/autotest_common.sh@10 -- # set +x 00:12:56.741 ************************************ 00:12:56.741 START TEST blockdev_xnvme 00:12:56.741 ************************************ 00:12:56.741 09:50:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:57.000 * Looking for test storage... 
00:12:57.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:57.001 09:50:50 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:57.001 09:50:50 -- bdev/nbd_common.sh@6 -- # set -e 00:12:57.001 09:50:50 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:57.001 09:50:50 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:57.001 09:50:50 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:57.001 09:50:50 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:57.001 09:50:50 -- bdev/blockdev.sh@18 -- # : 00:12:57.001 09:50:50 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:57.001 09:50:50 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:57.001 09:50:50 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:57.001 09:50:50 -- bdev/blockdev.sh@672 -- # uname -s 00:12:57.001 09:50:50 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:57.001 09:50:50 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:57.001 09:50:50 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:12:57.001 09:50:50 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:57.001 09:50:50 -- bdev/blockdev.sh@682 -- # dek= 00:12:57.001 09:50:50 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:57.001 09:50:50 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:57.001 09:50:50 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:57.001 09:50:50 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:12:57.001 09:50:50 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:12:57.001 09:50:50 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:57.001 09:50:50 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=68648 00:12:57.001 09:50:50 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:57.001 09:50:50 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:57.001 09:50:50 -- bdev/blockdev.sh@47 -- # waitforlisten 68648 00:12:57.001 09:50:50 -- common/autotest_common.sh@819 -- # '[' -z 68648 ']' 00:12:57.001 09:50:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.001 09:50:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:57.001 09:50:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.001 09:50:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:57.001 09:50:50 -- common/autotest_common.sh@10 -- # set +x 00:12:57.001 [2024-06-10 09:50:50.681811] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:12:57.001 [2024-06-10 09:50:50.682009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68648 ] 00:12:57.260 [2024-06-10 09:50:50.847865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.519 [2024-06-10 09:50:51.079448] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:57.519 [2024-06-10 09:50:51.079732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.894 09:50:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:58.894 09:50:52 -- common/autotest_common.sh@852 -- # return 0 00:12:58.894 09:50:52 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:58.894 09:50:52 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:12:58.894 09:50:52 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:12:58.894 09:50:52 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:12:58.894 09:50:52 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:59.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:59.410 Waiting for block devices as requested 00:12:59.410 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:59.410 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:59.410 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:59.668 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.935 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:13:04.935 09:50:58 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:13:04.935 09:50:58 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:13:04.935 09:50:58 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:13:04.935 09:50:58 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:13:04.935 09:50:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:04.935 09:50:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:13:04.935 09:50:58 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:13:04.935 09:50:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:13:04.935 09:50:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:04.935 09:50:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:04.935 09:50:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:13:04.935 09:50:58 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:13:04.935 09:50:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:04.935 09:50:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:04.935 09:50:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:04.935 09:50:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:13:04.935 09:50:58 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:13:04.935 09:50:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:04.935 09:50:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:04.935 09:50:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:04.935 09:50:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:13:04.935 09:50:58 -- common/autotest_common.sh@1647 -- # local 
device=nvme1n2 00:13:04.935 09:50:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:13:04.935 09:50:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:04.935 09:50:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:04.935 09:50:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:13:04.935 09:50:58 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:13:04.935 09:50:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:13:04.935 09:50:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:04.935 09:50:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:04.935 09:50:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:13:04.935 09:50:58 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:13:04.935 09:50:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:04.935 09:50:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:04.935 09:50:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:04.935 09:50:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:13:04.935 09:50:58 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:13:04.935 09:50:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:04.935 09:50:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:04.935 09:50:58 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.935 09:50:58 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:04.935 09:50:58 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.935 09:50:58 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:04.935 09:50:58 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.935 09:50:58 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:04.935 09:50:58 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.935 09:50:58 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:04.935 09:50:58 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.935 09:50:58 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:04.935 09:50:58 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.935 09:50:58 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:13:04.935 09:50:58 -- bdev/blockdev.sh@98 -- # rpc_cmd 
00:13:04.935 09:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.935 09:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.935 09:50:58 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:04.935 nvme0n1 00:13:04.935 nvme1n1 00:13:04.935 nvme1n2 00:13:04.935 nvme1n3 00:13:04.935 nvme2n1 00:13:04.935 nvme3n1 00:13:04.935 09:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:13:04.935 09:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.935 09:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.935 09:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@738 -- # cat 00:13:04.935 09:50:58 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:13:04.935 09:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.935 09:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.935 09:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.935 09:50:58 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:13:04.935 09:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.936 09:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.936 09:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.936 09:50:58 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:04.936 09:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.936 09:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.936 09:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.936 09:50:58 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:13:04.936 09:50:58 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:13:04.936 09:50:58 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:13:04.936 09:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.936 09:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.936 09:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.936 09:50:58 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:13:04.936 09:50:58 -- bdev/blockdev.sh@747 -- # jq -r .name 00:13:04.936 09:50:58 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "81154bf9-6ba7-4d81-9613-391116387ff3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "81154bf9-6ba7-4d81-9613-391116387ff3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d7931ee8-3ca0-4b86-98e5-2796522d920f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d7931ee8-3ca0-4b86-98e5-2796522d920f",' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "8b5c9705-778d-48ae-aa52-f49fdfc61720"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8b5c9705-778d-48ae-aa52-f49fdfc61720",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "e0a19dbb-e794-4361-8fb1-f9f3b8cce261"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e0a19dbb-e794-4361-8fb1-f9f3b8cce261",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "cad0143d-cc30-4dca-ad11-0235d63dfc25"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cad0143d-cc30-4dca-ad11-0235d63dfc25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "d65e4361-296a-494a-a84c-e365982d943a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d65e4361-296a-494a-a84c-e365982d943a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:04.936 09:50:58 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:13:04.936 09:50:58 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:13:04.936 09:50:58 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:13:04.936 09:50:58 -- bdev/blockdev.sh@752 -- # killprocess 68648 00:13:04.936 09:50:58 -- 
common/autotest_common.sh@926 -- # '[' -z 68648 ']' 00:13:04.936 09:50:58 -- common/autotest_common.sh@930 -- # kill -0 68648 00:13:04.936 09:50:58 -- common/autotest_common.sh@931 -- # uname 00:13:04.936 09:50:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:04.936 09:50:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68648 00:13:04.936 09:50:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:04.936 09:50:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:04.936 killing process with pid 68648 00:13:04.936 09:50:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68648' 00:13:04.936 09:50:58 -- common/autotest_common.sh@945 -- # kill 68648 00:13:04.936 09:50:58 -- common/autotest_common.sh@950 -- # wait 68648 00:13:07.466 09:51:00 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:07.466 09:51:00 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:07.466 09:51:00 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:13:07.466 09:51:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:07.466 09:51:00 -- common/autotest_common.sh@10 -- # set +x 00:13:07.466 ************************************ 00:13:07.466 START TEST bdev_hello_world 00:13:07.466 ************************************ 00:13:07.466 09:51:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:07.466 [2024-06-10 09:51:00.820798] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:07.466 [2024-06-10 09:51:00.821019] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69044 ] 00:13:07.466 [2024-06-10 09:51:00.996994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.466 [2024-06-10 09:51:01.188306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.032 [2024-06-10 09:51:01.589009] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:08.032 [2024-06-10 09:51:01.589077] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:08.032 [2024-06-10 09:51:01.589130] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:08.032 [2024-06-10 09:51:01.591459] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:08.032 [2024-06-10 09:51:01.591862] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:08.032 [2024-06-10 09:51:01.591904] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:08.032 [2024-06-10 09:51:01.592102] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
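The hello-world pass above boots the hello_bdev example against the first xnvme bdev from the shared config, writes a string through the io channel, and reads it back. A minimal sketch of the invocation, assuming bdev.json defines nvme0n1 as above:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1
    # success path ends with: Read string from bdev : Hello World!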
00:13:08.032 00:13:08.032 [2024-06-10 09:51:01.592150] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:08.965 00:13:08.965 real 0m1.978s 00:13:08.965 user 0m1.655s 00:13:08.965 sys 0m0.206s 00:13:08.965 ************************************ 00:13:08.965 END TEST bdev_hello_world 00:13:08.965 ************************************ 00:13:08.965 09:51:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.965 09:51:02 -- common/autotest_common.sh@10 -- # set +x 00:13:09.223 09:51:02 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:13:09.223 09:51:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:09.223 09:51:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.223 09:51:02 -- common/autotest_common.sh@10 -- # set +x 00:13:09.223 ************************************ 00:13:09.223 START TEST bdev_bounds 00:13:09.223 ************************************ 00:13:09.223 09:51:02 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:13:09.223 Process bdevio pid: 69086 00:13:09.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.223 09:51:02 -- bdev/blockdev.sh@288 -- # bdevio_pid=69086 00:13:09.223 09:51:02 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:09.223 09:51:02 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 69086' 00:13:09.223 09:51:02 -- bdev/blockdev.sh@291 -- # waitforlisten 69086 00:13:09.223 09:51:02 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:09.223 09:51:02 -- common/autotest_common.sh@819 -- # '[' -z 69086 ']' 00:13:09.223 09:51:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.223 09:51:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:09.223 09:51:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.223 09:51:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:09.223 09:51:02 -- common/autotest_common.sh@10 -- # set +x 00:13:09.223 [2024-06-10 09:51:02.863667] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:13:09.223 [2024-06-10 09:51:02.863864] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69086 ] 00:13:09.481 [2024-06-10 09:51:03.040179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:09.739 [2024-06-10 09:51:03.279972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.739 [2024-06-10 09:51:03.280127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.739 [2024-06-10 09:51:03.280164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.309 09:51:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:10.309 09:51:03 -- common/autotest_common.sh@852 -- # return 0 00:13:10.309 09:51:03 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:10.309 I/O targets: 00:13:10.309 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:10.309 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:10.309 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:10.309 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:10.309 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:10.309 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:10.309 00:13:10.309 00:13:10.309 CUnit - A unit testing framework for C - Version 2.1-3 00:13:10.309 http://cunit.sourceforge.net/ 00:13:10.309 00:13:10.309 00:13:10.309 Suite: bdevio tests on: nvme3n1 00:13:10.309 Test: blockdev write read block ...passed 00:13:10.309 Test: blockdev write zeroes read block ...passed 00:13:10.309 Test: blockdev write zeroes read no split ...passed 00:13:10.309 Test: blockdev write zeroes read split ...passed 00:13:10.309 Test: blockdev write zeroes read split partial ...passed 00:13:10.309 Test: blockdev reset ...passed 00:13:10.309 Test: blockdev write read 8 blocks ...passed 00:13:10.309 Test: blockdev write read size > 128k ...passed 00:13:10.309 Test: blockdev write read invalid size ...passed 00:13:10.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:10.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:10.309 Test: blockdev write read max offset ...passed 00:13:10.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:10.309 Test: blockdev writev readv 8 blocks ...passed 00:13:10.309 Test: blockdev writev readv 30 x 1block ...passed 00:13:10.309 Test: blockdev writev readv block ...passed 00:13:10.309 Test: blockdev writev readv size > 128k ...passed 00:13:10.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:10.309 Test: blockdev comparev and writev ...passed 00:13:10.309 Test: blockdev nvme passthru rw ...passed 00:13:10.309 Test: blockdev nvme passthru vendor specific ...passed 00:13:10.309 Test: blockdev nvme admin passthru ...passed 00:13:10.309 Test: blockdev copy ...passed 00:13:10.309 Suite: bdevio tests on: nvme2n1 00:13:10.309 Test: blockdev write read block ...passed 00:13:10.310 Test: blockdev write zeroes read block ...passed 00:13:10.310 Test: blockdev write zeroes read no split ...passed 00:13:10.310 Test: blockdev write zeroes read split ...passed 00:13:10.310 Test: blockdev write zeroes read split partial ...passed 00:13:10.310 Test: blockdev reset ...passed 00:13:10.310 Test: blockdev write read 8 blocks ...passed 00:13:10.310 Test: blockdev write read size > 128k 
...passed 00:13:10.310 Test: blockdev write read invalid size ...passed 00:13:10.310 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:10.310 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:10.310 Test: blockdev write read max offset ...passed 00:13:10.310 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:10.310 Test: blockdev writev readv 8 blocks ...passed 00:13:10.310 Test: blockdev writev readv 30 x 1block ...passed 00:13:10.310 Test: blockdev writev readv block ...passed 00:13:10.310 Test: blockdev writev readv size > 128k ...passed 00:13:10.310 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:10.310 Test: blockdev comparev and writev ...passed 00:13:10.310 Test: blockdev nvme passthru rw ...passed 00:13:10.310 Test: blockdev nvme passthru vendor specific ...passed 00:13:10.310 Test: blockdev nvme admin passthru ...passed 00:13:10.310 Test: blockdev copy ...passed 00:13:10.310 Suite: bdevio tests on: nvme1n3 00:13:10.310 Test: blockdev write read block ...passed 00:13:10.310 Test: blockdev write zeroes read block ...passed 00:13:10.310 Test: blockdev write zeroes read no split ...passed 00:13:10.569 Test: blockdev write zeroes read split ...passed 00:13:10.569 Test: blockdev write zeroes read split partial ...passed 00:13:10.569 Test: blockdev reset ...passed 00:13:10.569 Test: blockdev write read 8 blocks ...passed 00:13:10.569 Test: blockdev write read size > 128k ...passed 00:13:10.569 Test: blockdev write read invalid size ...passed 00:13:10.569 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:10.569 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:10.569 Test: blockdev write read max offset ...passed 00:13:10.569 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:10.569 Test: blockdev writev readv 8 blocks ...passed 00:13:10.569 Test: blockdev writev readv 30 x 1block ...passed 00:13:10.569 Test: blockdev writev readv block ...passed 00:13:10.569 Test: blockdev writev readv size > 128k ...passed 00:13:10.569 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:10.569 Test: blockdev comparev and writev ...passed 00:13:10.569 Test: blockdev nvme passthru rw ...passed 00:13:10.569 Test: blockdev nvme passthru vendor specific ...passed 00:13:10.569 Test: blockdev nvme admin passthru ...passed 00:13:10.569 Test: blockdev copy ...passed 00:13:10.569 Suite: bdevio tests on: nvme1n2 00:13:10.569 Test: blockdev write read block ...passed 00:13:10.569 Test: blockdev write zeroes read block ...passed 00:13:10.569 Test: blockdev write zeroes read no split ...passed 00:13:10.569 Test: blockdev write zeroes read split ...passed 00:13:10.569 Test: blockdev write zeroes read split partial ...passed 00:13:10.569 Test: blockdev reset ...passed 00:13:10.569 Test: blockdev write read 8 blocks ...passed 00:13:10.569 Test: blockdev write read size > 128k ...passed 00:13:10.569 Test: blockdev write read invalid size ...passed 00:13:10.569 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:10.569 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:10.569 Test: blockdev write read max offset ...passed 00:13:10.569 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:10.569 Test: blockdev writev readv 8 blocks ...passed 00:13:10.569 Test: blockdev writev readv 30 x 1block ...passed 00:13:10.569 Test: blockdev writev readv 
block ...passed 00:13:10.569 Test: blockdev writev readv size > 128k ...passed 00:13:10.569 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:10.569 Test: blockdev comparev and writev ...passed 00:13:10.569 Test: blockdev nvme passthru rw ...passed 00:13:10.569 Test: blockdev nvme passthru vendor specific ...passed 00:13:10.569 Test: blockdev nvme admin passthru ...passed 00:13:10.569 Test: blockdev copy ...passed 00:13:10.569 Suite: bdevio tests on: nvme1n1 00:13:10.569 Test: blockdev write read block ...passed 00:13:10.569 Test: blockdev write zeroes read block ...passed 00:13:10.569 Test: blockdev write zeroes read no split ...passed 00:13:10.569 Test: blockdev write zeroes read split ...passed 00:13:10.569 Test: blockdev write zeroes read split partial ...passed 00:13:10.569 Test: blockdev reset ...passed 00:13:10.569 Test: blockdev write read 8 blocks ...passed 00:13:10.569 Test: blockdev write read size > 128k ...passed 00:13:10.569 Test: blockdev write read invalid size ...passed 00:13:10.569 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:10.569 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:10.569 Test: blockdev write read max offset ...passed 00:13:10.569 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:10.569 Test: blockdev writev readv 8 blocks ...passed 00:13:10.569 Test: blockdev writev readv 30 x 1block ...passed 00:13:10.569 Test: blockdev writev readv block ...passed 00:13:10.569 Test: blockdev writev readv size > 128k ...passed 00:13:10.569 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:10.569 Test: blockdev comparev and writev ...passed 00:13:10.569 Test: blockdev nvme passthru rw ...passed 00:13:10.569 Test: blockdev nvme passthru vendor specific ...passed 00:13:10.569 Test: blockdev nvme admin passthru ...passed 00:13:10.569 Test: blockdev copy ...passed 00:13:10.569 Suite: bdevio tests on: nvme0n1 00:13:10.569 Test: blockdev write read block ...passed 00:13:10.569 Test: blockdev write zeroes read block ...passed 00:13:10.569 Test: blockdev write zeroes read no split ...passed 00:13:10.569 Test: blockdev write zeroes read split ...passed 00:13:10.569 Test: blockdev write zeroes read split partial ...passed 00:13:10.569 Test: blockdev reset ...passed 00:13:10.569 Test: blockdev write read 8 blocks ...passed 00:13:10.569 Test: blockdev write read size > 128k ...passed 00:13:10.569 Test: blockdev write read invalid size ...passed 00:13:10.569 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:10.569 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:10.569 Test: blockdev write read max offset ...passed 00:13:10.569 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:10.569 Test: blockdev writev readv 8 blocks ...passed 00:13:10.569 Test: blockdev writev readv 30 x 1block ...passed 00:13:10.569 Test: blockdev writev readv block ...passed 00:13:10.569 Test: blockdev writev readv size > 128k ...passed 00:13:10.569 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:10.569 Test: blockdev comparev and writev ...passed 00:13:10.569 Test: blockdev nvme passthru rw ...passed 00:13:10.569 Test: blockdev nvme passthru vendor specific ...passed 00:13:10.569 Test: blockdev nvme admin passthru ...passed 00:13:10.569 Test: blockdev copy ...passed 00:13:10.569 00:13:10.569 Run Summary: Type Total Ran Passed Failed Inactive 00:13:10.569 suites 6 6 n/a 0 0 
00:13:10.569 tests 138 138 138 0 0 00:13:10.569 asserts 780 780 780 0 n/a 00:13:10.569 00:13:10.569 Elapsed time = 1.126 seconds 00:13:10.569 0 00:13:10.829 09:51:04 -- bdev/blockdev.sh@293 -- # killprocess 69086 00:13:10.829 09:51:04 -- common/autotest_common.sh@926 -- # '[' -z 69086 ']' 00:13:10.829 09:51:04 -- common/autotest_common.sh@930 -- # kill -0 69086 00:13:10.829 09:51:04 -- common/autotest_common.sh@931 -- # uname 00:13:10.829 09:51:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:10.830 09:51:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69086 00:13:10.830 09:51:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:10.830 09:51:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:10.830 09:51:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69086' 00:13:10.830 killing process with pid 69086 00:13:10.830 09:51:04 -- common/autotest_common.sh@945 -- # kill 69086 00:13:10.830 09:51:04 -- common/autotest_common.sh@950 -- # wait 69086 00:13:11.763 09:51:05 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:13:11.763 00:13:11.763 real 0m2.722s 00:13:11.763 user 0m6.384s 00:13:11.763 sys 0m0.359s 00:13:11.763 09:51:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.763 09:51:05 -- common/autotest_common.sh@10 -- # set +x 00:13:11.763 ************************************ 00:13:11.763 END TEST bdev_bounds 00:13:11.763 ************************************ 00:13:11.763 09:51:05 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:13:11.763 09:51:05 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:11.763 09:51:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:11.763 09:51:05 -- common/autotest_common.sh@10 -- # set +x 00:13:11.763 ************************************ 00:13:11.763 START TEST bdev_nbd 00:13:11.763 ************************************ 00:13:11.763 09:51:05 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:13:11.763 09:51:05 -- bdev/blockdev.sh@298 -- # uname -s 00:13:12.021 09:51:05 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:13:12.021 09:51:05 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.021 09:51:05 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:12.021 09:51:05 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:12.021 09:51:05 -- bdev/blockdev.sh@302 -- # local bdev_all 00:13:12.021 09:51:05 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:13:12.021 09:51:05 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:13:12.021 09:51:05 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:12.021 09:51:05 -- bdev/blockdev.sh@309 -- # local nbd_all 00:13:12.021 09:51:05 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:13:12.021 09:51:05 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:12.021 09:51:05 -- bdev/blockdev.sh@312 -- # local nbd_list 00:13:12.021 09:51:05 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 
'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:12.021 09:51:05 -- bdev/blockdev.sh@313 -- # local bdev_list 00:13:12.021 09:51:05 -- bdev/blockdev.sh@316 -- # nbd_pid=69142 00:13:12.021 09:51:05 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:12.021 09:51:05 -- bdev/blockdev.sh@318 -- # waitforlisten 69142 /var/tmp/spdk-nbd.sock 00:13:12.021 09:51:05 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:12.021 09:51:05 -- common/autotest_common.sh@819 -- # '[' -z 69142 ']' 00:13:12.021 09:51:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:12.021 09:51:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:12.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:12.021 09:51:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:12.021 09:51:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:12.021 09:51:05 -- common/autotest_common.sh@10 -- # set +x 00:13:12.021 [2024-06-10 09:51:05.614958] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:12.021 [2024-06-10 09:51:05.615092] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.021 [2024-06-10 09:51:05.777748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.279 [2024-06-10 09:51:05.968881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.849 09:51:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:12.849 09:51:06 -- common/autotest_common.sh@852 -- # return 0 00:13:12.849 09:51:06 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:13:12.849 09:51:06 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.849 09:51:06 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:12.849 09:51:06 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:12.849 09:51:06 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:13:12.849 09:51:06 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.849 09:51:06 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:12.849 09:51:06 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:12.849 09:51:06 -- bdev/nbd_common.sh@24 -- # local i 00:13:12.849 09:51:06 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:12.849 09:51:06 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:12.849 09:51:06 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:12.849 09:51:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:13.116 09:51:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:13.116 09:51:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:13.116 09:51:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:13.116 09:51:06 -- common/autotest_common.sh@856 -- # local 
nbd_name=nbd0 00:13:13.116 09:51:06 -- common/autotest_common.sh@857 -- # local i 00:13:13.116 09:51:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:13.116 09:51:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:13.116 09:51:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:13.116 09:51:06 -- common/autotest_common.sh@861 -- # break 00:13:13.116 09:51:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:13.116 09:51:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:13.116 09:51:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.116 1+0 records in 00:13:13.116 1+0 records out 00:13:13.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492611 s, 8.3 MB/s 00:13:13.116 09:51:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.116 09:51:06 -- common/autotest_common.sh@874 -- # size=4096 00:13:13.116 09:51:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.116 09:51:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:13.116 09:51:06 -- common/autotest_common.sh@877 -- # return 0 00:13:13.116 09:51:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:13.116 09:51:06 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:13.116 09:51:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:13.375 09:51:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:13.375 09:51:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:13.375 09:51:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:13.375 09:51:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:13:13.375 09:51:07 -- common/autotest_common.sh@857 -- # local i 00:13:13.375 09:51:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:13.375 09:51:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:13.375 09:51:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:13:13.375 09:51:07 -- common/autotest_common.sh@861 -- # break 00:13:13.375 09:51:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:13.375 09:51:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:13.375 09:51:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.375 1+0 records in 00:13:13.375 1+0 records out 00:13:13.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483637 s, 8.5 MB/s 00:13:13.375 09:51:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.375 09:51:07 -- common/autotest_common.sh@874 -- # size=4096 00:13:13.375 09:51:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.375 09:51:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:13.375 09:51:07 -- common/autotest_common.sh@877 -- # return 0 00:13:13.375 09:51:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:13.375 09:51:07 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:13.375 09:51:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:13:13.636 09:51:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:13.636 09:51:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:13.636 09:51:07 -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:13:13.636 09:51:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:13:13.636 09:51:07 -- common/autotest_common.sh@857 -- # local i 00:13:13.636 09:51:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:13.636 09:51:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:13.636 09:51:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:13:13.636 09:51:07 -- common/autotest_common.sh@861 -- # break 00:13:13.636 09:51:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:13.636 09:51:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:13.636 09:51:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.636 1+0 records in 00:13:13.636 1+0 records out 00:13:13.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638671 s, 6.4 MB/s 00:13:13.895 09:51:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.895 09:51:07 -- common/autotest_common.sh@874 -- # size=4096 00:13:13.895 09:51:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.895 09:51:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:13.895 09:51:07 -- common/autotest_common.sh@877 -- # return 0 00:13:13.895 09:51:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:13.895 09:51:07 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:13.895 09:51:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:13:14.154 09:51:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:14.154 09:51:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:14.154 09:51:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:14.154 09:51:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:13:14.154 09:51:07 -- common/autotest_common.sh@857 -- # local i 00:13:14.154 09:51:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:14.154 09:51:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:14.154 09:51:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:13:14.154 09:51:07 -- common/autotest_common.sh@861 -- # break 00:13:14.154 09:51:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:14.154 09:51:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:14.154 09:51:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.154 1+0 records in 00:13:14.154 1+0 records out 00:13:14.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000889765 s, 4.6 MB/s 00:13:14.154 09:51:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.154 09:51:07 -- common/autotest_common.sh@874 -- # size=4096 00:13:14.154 09:51:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.154 09:51:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:14.154 09:51:07 -- common/autotest_common.sh@877 -- # return 0 00:13:14.154 09:51:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:14.154 09:51:07 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:14.154 09:51:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:14.413 09:51:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:14.413 09:51:07 -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:14.413 09:51:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:14.413 09:51:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:13:14.413 09:51:07 -- common/autotest_common.sh@857 -- # local i 00:13:14.413 09:51:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:14.413 09:51:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:14.413 09:51:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:13:14.413 09:51:07 -- common/autotest_common.sh@861 -- # break 00:13:14.413 09:51:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:14.413 09:51:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:14.413 09:51:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.413 1+0 records in 00:13:14.413 1+0 records out 00:13:14.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000912444 s, 4.5 MB/s 00:13:14.413 09:51:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.413 09:51:08 -- common/autotest_common.sh@874 -- # size=4096 00:13:14.413 09:51:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.413 09:51:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:14.413 09:51:08 -- common/autotest_common.sh@877 -- # return 0 00:13:14.413 09:51:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:14.413 09:51:08 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:14.413 09:51:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:14.671 09:51:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:14.671 09:51:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:14.671 09:51:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:14.671 09:51:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:13:14.671 09:51:08 -- common/autotest_common.sh@857 -- # local i 00:13:14.671 09:51:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:14.671 09:51:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:14.671 09:51:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:13:14.671 09:51:08 -- common/autotest_common.sh@861 -- # break 00:13:14.671 09:51:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:14.671 09:51:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:14.671 09:51:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.671 1+0 records in 00:13:14.671 1+0 records out 00:13:14.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635913 s, 6.4 MB/s 00:13:14.671 09:51:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.671 09:51:08 -- common/autotest_common.sh@874 -- # size=4096 00:13:14.671 09:51:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.671 09:51:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:14.671 09:51:08 -- common/autotest_common.sh@877 -- # return 0 00:13:14.671 09:51:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:14.671 09:51:08 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:14.671 09:51:08 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:14.929 09:51:08 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:14.929 { 00:13:14.929 "nbd_device": "/dev/nbd0", 00:13:14.929 "bdev_name": "nvme0n1" 00:13:14.929 }, 00:13:14.929 { 00:13:14.929 "nbd_device": "/dev/nbd1", 00:13:14.929 "bdev_name": "nvme1n1" 00:13:14.929 }, 00:13:14.929 { 00:13:14.929 "nbd_device": "/dev/nbd2", 00:13:14.929 "bdev_name": "nvme1n2" 00:13:14.929 }, 00:13:14.929 { 00:13:14.929 "nbd_device": "/dev/nbd3", 00:13:14.929 "bdev_name": "nvme1n3" 00:13:14.929 }, 00:13:14.929 { 00:13:14.929 "nbd_device": "/dev/nbd4", 00:13:14.929 "bdev_name": "nvme2n1" 00:13:14.929 }, 00:13:14.929 { 00:13:14.929 "nbd_device": "/dev/nbd5", 00:13:14.929 "bdev_name": "nvme3n1" 00:13:14.929 } 00:13:14.929 ]' 00:13:14.929 09:51:08 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:14.929 09:51:08 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:14.929 09:51:08 -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:14.929 { 00:13:14.929 "nbd_device": "/dev/nbd0", 00:13:14.929 "bdev_name": "nvme0n1" 00:13:14.929 }, 00:13:14.929 { 00:13:14.929 "nbd_device": "/dev/nbd1", 00:13:14.929 "bdev_name": "nvme1n1" 00:13:14.929 }, 00:13:14.929 { 00:13:14.929 "nbd_device": "/dev/nbd2", 00:13:14.929 "bdev_name": "nvme1n2" 00:13:14.929 }, 00:13:14.929 { 00:13:14.929 "nbd_device": "/dev/nbd3", 00:13:14.929 "bdev_name": "nvme1n3" 00:13:14.929 }, 00:13:14.929 { 00:13:14.929 "nbd_device": "/dev/nbd4", 00:13:14.929 "bdev_name": "nvme2n1" 00:13:14.929 }, 00:13:14.929 { 00:13:14.929 "nbd_device": "/dev/nbd5", 00:13:14.929 "bdev_name": "nvme3n1" 00:13:14.929 } 00:13:14.929 ]' 00:13:14.930 09:51:08 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:14.930 09:51:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.930 09:51:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:14.930 09:51:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:14.930 09:51:08 -- bdev/nbd_common.sh@51 -- # local i 00:13:14.930 09:51:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.930 09:51:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:15.208 09:51:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:15.208 09:51:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:15.208 09:51:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:15.208 09:51:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.208 09:51:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.208 09:51:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:15.208 09:51:08 -- bdev/nbd_common.sh@41 -- # break 00:13:15.208 09:51:08 -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.208 09:51:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.208 09:51:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:15.470 09:51:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:15.470 09:51:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:15.470 09:51:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:15.470 09:51:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.470 09:51:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.470 09:51:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:13:15.470 09:51:09 -- bdev/nbd_common.sh@41 -- # break 00:13:15.470 09:51:09 -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.470 09:51:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.470 09:51:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@41 -- # break 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@41 -- # break 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.037 09:51:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:16.602 09:51:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:16.602 09:51:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:16.602 09:51:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:16.602 09:51:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.602 09:51:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.602 09:51:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:16.602 09:51:10 -- bdev/nbd_common.sh@41 -- # break 00:13:16.602 09:51:10 -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.602 09:51:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.603 09:51:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:16.603 09:51:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:16.603 09:51:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:16.603 09:51:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:16.603 09:51:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.603 09:51:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.603 09:51:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:16.603 09:51:10 -- bdev/nbd_common.sh@41 -- # break 00:13:16.603 09:51:10 -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.603 09:51:10 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:16.603 09:51:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.603 09:51:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
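Each nbd_stop_disk call above is paired with a waitfornbd_exit loop that polls /proc/partitions until the kernel has actually released the device; device start-up uses the mirror-image loop. A standalone sketch of both directions, where the 20-iteration bound follows the trace and the sleep interval is an assumption:

# Sketch of the /proc/partitions polling traced above; not the SPDK helpers.
waitfornbd_sketch() {            # wait for nbdX to appear after nbd_start_disk
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && return 0
        sleep 0.1                # interval is an assumption
    done
    return 1
}
waitfornbd_exit_sketch() {       # wait for nbdX to vanish after nbd_stop_disk
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}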
00:13:16.861 09:51:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:16.861 09:51:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:16.861 09:51:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@65 -- # true 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@65 -- # count=0 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@122 -- # count=0 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@127 -- # return 0 00:13:17.119 09:51:10 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@12 -- # local i 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:17.119 /dev/nbd0 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.119 09:51:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:17.119 09:51:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:17.119 09:51:10 -- common/autotest_common.sh@857 -- # local i 00:13:17.119 09:51:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:17.119 09:51:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:17.119 09:51:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:17.377 09:51:10 -- common/autotest_common.sh@861 -- # break 00:13:17.377 09:51:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:17.377 09:51:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:17.377 09:51:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.377 1+0 records in 00:13:17.377 1+0 records out 00:13:17.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482688 s, 
8.5 MB/s 00:13:17.377 09:51:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.377 09:51:10 -- common/autotest_common.sh@874 -- # size=4096 00:13:17.377 09:51:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.377 09:51:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:17.377 09:51:10 -- common/autotest_common.sh@877 -- # return 0 00:13:17.377 09:51:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.377 09:51:10 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:17.377 09:51:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:17.636 /dev/nbd1 00:13:17.636 09:51:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:17.636 09:51:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:17.636 09:51:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:13:17.636 09:51:11 -- common/autotest_common.sh@857 -- # local i 00:13:17.636 09:51:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:17.636 09:51:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:17.636 09:51:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:13:17.636 09:51:11 -- common/autotest_common.sh@861 -- # break 00:13:17.636 09:51:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:17.636 09:51:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:17.636 09:51:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.636 1+0 records in 00:13:17.636 1+0 records out 00:13:17.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548223 s, 7.5 MB/s 00:13:17.636 09:51:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.636 09:51:11 -- common/autotest_common.sh@874 -- # size=4096 00:13:17.636 09:51:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.636 09:51:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:17.636 09:51:11 -- common/autotest_common.sh@877 -- # return 0 00:13:17.636 09:51:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.636 09:51:11 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:17.636 09:51:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:13:17.895 /dev/nbd10 00:13:17.895 09:51:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:17.895 09:51:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:17.895 09:51:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:13:17.895 09:51:11 -- common/autotest_common.sh@857 -- # local i 00:13:17.895 09:51:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:17.895 09:51:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:17.895 09:51:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:13:17.895 09:51:11 -- common/autotest_common.sh@861 -- # break 00:13:17.895 09:51:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:17.895 09:51:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:17.895 09:51:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.895 1+0 records in 00:13:17.895 1+0 records out 00:13:17.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000585041 s, 7.0 MB/s 00:13:17.895 09:51:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.895 09:51:11 -- common/autotest_common.sh@874 -- # size=4096 00:13:17.895 09:51:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.895 09:51:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:17.895 09:51:11 -- common/autotest_common.sh@877 -- # return 0 00:13:17.895 09:51:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.895 09:51:11 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:17.895 09:51:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:13:18.153 /dev/nbd11 00:13:18.153 09:51:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:18.153 09:51:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:18.153 09:51:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:13:18.153 09:51:11 -- common/autotest_common.sh@857 -- # local i 00:13:18.153 09:51:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:18.153 09:51:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:18.153 09:51:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:13:18.153 09:51:11 -- common/autotest_common.sh@861 -- # break 00:13:18.153 09:51:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:18.153 09:51:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:18.153 09:51:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.153 1+0 records in 00:13:18.153 1+0 records out 00:13:18.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512622 s, 8.0 MB/s 00:13:18.153 09:51:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.153 09:51:11 -- common/autotest_common.sh@874 -- # size=4096 00:13:18.153 09:51:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.153 09:51:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:18.154 09:51:11 -- common/autotest_common.sh@877 -- # return 0 00:13:18.154 09:51:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.154 09:51:11 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:18.154 09:51:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:13:18.421 /dev/nbd12 00:13:18.421 09:51:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:18.421 09:51:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:18.421 09:51:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:13:18.421 09:51:12 -- common/autotest_common.sh@857 -- # local i 00:13:18.421 09:51:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:18.421 09:51:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:18.421 09:51:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:13:18.421 09:51:12 -- common/autotest_common.sh@861 -- # break 00:13:18.421 09:51:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:18.421 09:51:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:18.421 09:51:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.421 1+0 records in 00:13:18.421 1+0 records out 00:13:18.421 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000811291 s, 5.0 MB/s 00:13:18.421 09:51:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.421 09:51:12 -- common/autotest_common.sh@874 -- # size=4096 00:13:18.421 09:51:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.421 09:51:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:18.421 09:51:12 -- common/autotest_common.sh@877 -- # return 0 00:13:18.421 09:51:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.421 09:51:12 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:18.421 09:51:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:18.690 /dev/nbd13 00:13:18.690 09:51:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:18.690 09:51:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:18.690 09:51:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:13:18.690 09:51:12 -- common/autotest_common.sh@857 -- # local i 00:13:18.690 09:51:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:18.690 09:51:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:18.690 09:51:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:13:18.690 09:51:12 -- common/autotest_common.sh@861 -- # break 00:13:18.690 09:51:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:18.690 09:51:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:18.690 09:51:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.690 1+0 records in 00:13:18.690 1+0 records out 00:13:18.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539338 s, 7.6 MB/s 00:13:18.690 09:51:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.690 09:51:12 -- common/autotest_common.sh@874 -- # size=4096 00:13:18.690 09:51:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.690 09:51:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:18.690 09:51:12 -- common/autotest_common.sh@877 -- # return 0 00:13:18.690 09:51:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.690 09:51:12 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:18.690 09:51:12 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:18.690 09:51:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:18.690 09:51:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:18.948 09:51:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:18.948 { 00:13:18.948 "nbd_device": "/dev/nbd0", 00:13:18.948 "bdev_name": "nvme0n1" 00:13:18.948 }, 00:13:18.948 { 00:13:18.948 "nbd_device": "/dev/nbd1", 00:13:18.948 "bdev_name": "nvme1n1" 00:13:18.948 }, 00:13:18.948 { 00:13:18.948 "nbd_device": "/dev/nbd10", 00:13:18.948 "bdev_name": "nvme1n2" 00:13:18.948 }, 00:13:18.948 { 00:13:18.948 "nbd_device": "/dev/nbd11", 00:13:18.948 "bdev_name": "nvme1n3" 00:13:18.948 }, 00:13:18.948 { 00:13:18.948 "nbd_device": "/dev/nbd12", 00:13:18.948 "bdev_name": "nvme2n1" 00:13:18.948 }, 00:13:18.948 { 00:13:18.948 "nbd_device": "/dev/nbd13", 00:13:18.948 "bdev_name": "nvme3n1" 00:13:18.948 } 00:13:18.948 ]' 00:13:18.948 09:51:12 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:18.948 { 00:13:18.948 "nbd_device": 
"/dev/nbd0", 00:13:18.948 "bdev_name": "nvme0n1" 00:13:18.948 }, 00:13:18.948 { 00:13:18.948 "nbd_device": "/dev/nbd1", 00:13:18.948 "bdev_name": "nvme1n1" 00:13:18.948 }, 00:13:18.948 { 00:13:18.948 "nbd_device": "/dev/nbd10", 00:13:18.948 "bdev_name": "nvme1n2" 00:13:18.948 }, 00:13:18.948 { 00:13:18.948 "nbd_device": "/dev/nbd11", 00:13:18.948 "bdev_name": "nvme1n3" 00:13:18.948 }, 00:13:18.948 { 00:13:18.948 "nbd_device": "/dev/nbd12", 00:13:18.948 "bdev_name": "nvme2n1" 00:13:18.948 }, 00:13:18.948 { 00:13:18.948 "nbd_device": "/dev/nbd13", 00:13:18.948 "bdev_name": "nvme3n1" 00:13:18.948 } 00:13:18.948 ]' 00:13:18.948 09:51:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:19.206 /dev/nbd1 00:13:19.206 /dev/nbd10 00:13:19.206 /dev/nbd11 00:13:19.206 /dev/nbd12 00:13:19.206 /dev/nbd13' 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:19.206 /dev/nbd1 00:13:19.206 /dev/nbd10 00:13:19.206 /dev/nbd11 00:13:19.206 /dev/nbd12 00:13:19.206 /dev/nbd13' 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@65 -- # count=6 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@66 -- # echo 6 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@95 -- # count=6 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:19.206 256+0 records in 00:13:19.206 256+0 records out 00:13:19.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00721189 s, 145 MB/s 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:19.206 256+0 records in 00:13:19.206 256+0 records out 00:13:19.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134235 s, 7.8 MB/s 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:19.206 09:51:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:19.465 256+0 records in 00:13:19.465 256+0 records out 00:13:19.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147724 s, 7.1 MB/s 00:13:19.465 09:51:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:19.465 09:51:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:19.465 256+0 records in 00:13:19.465 256+0 records out 00:13:19.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141567 s, 7.4 MB/s 00:13:19.465 09:51:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:19.465 09:51:13 -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:19.723 256+0 records in 00:13:19.723 256+0 records out 00:13:19.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127372 s, 8.2 MB/s 00:13:19.723 09:51:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:19.723 09:51:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:19.723 256+0 records in 00:13:19.723 256+0 records out 00:13:19.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166661 s, 6.3 MB/s 00:13:19.723 09:51:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:19.723 09:51:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:19.982 256+0 records in 00:13:19.982 256+0 records out 00:13:19.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138894 s, 7.5 MB/s 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@51 -- # local i 00:13:19.982 09:51:13 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.982 09:51:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:20.241 09:51:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:20.241 09:51:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:20.241 09:51:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:20.241 09:51:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.241 09:51:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.241 09:51:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:20.241 09:51:13 -- bdev/nbd_common.sh@41 -- # break 00:13:20.241 09:51:13 -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.241 09:51:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.241 09:51:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:20.499 09:51:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:20.499 09:51:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:20.499 09:51:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:20.499 09:51:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.499 09:51:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.499 09:51:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:20.499 09:51:14 -- bdev/nbd_common.sh@41 -- # break 00:13:20.499 09:51:14 -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.499 09:51:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.499 09:51:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:20.758 09:51:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:20.758 09:51:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:20.758 09:51:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:20.758 09:51:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.758 09:51:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.758 09:51:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:20.758 09:51:14 -- bdev/nbd_common.sh@41 -- # break 00:13:20.758 09:51:14 -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.758 09:51:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.758 09:51:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:21.325 09:51:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:21.325 09:51:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:21.325 09:51:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:21.325 09:51:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.325 09:51:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.325 09:51:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:21.325 09:51:14 -- bdev/nbd_common.sh@41 -- # break 00:13:21.325 09:51:14 -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.325 09:51:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:21.325 09:51:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:21.325 09:51:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:21.583 09:51:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:21.583 09:51:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:21.583 09:51:15 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.583 09:51:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.583 09:51:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:21.583 09:51:15 -- bdev/nbd_common.sh@41 -- # break 00:13:21.583 09:51:15 -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.583 09:51:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:21.583 09:51:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:21.842 09:51:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:21.842 09:51:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:21.842 09:51:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:21.842 09:51:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.842 09:51:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.842 09:51:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:21.842 09:51:15 -- bdev/nbd_common.sh@41 -- # break 00:13:21.842 09:51:15 -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.842 09:51:15 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:21.842 09:51:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:21.842 09:51:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:21.843 09:51:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:21.843 09:51:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:21.843 09:51:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@65 -- # true 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@65 -- # count=0 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@104 -- # count=0 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@109 -- # return 0 00:13:22.101 09:51:15 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:22.101 09:51:15 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:22.359 malloc_lvol_verify 00:13:22.359 09:51:15 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:22.618 7c5af8d3-96df-4e05-b3af-42c0800a66ae 00:13:22.618 09:51:16 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:22.876 b0bb90c6-b205-4bb8-8a86-1f03c01c1ec4 00:13:22.876 09:51:16 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:23.134 /dev/nbd0 00:13:23.134 09:51:16 -- 
bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:23.134 mke2fs 1.46.5 (30-Dec-2021) 00:13:23.134 Discarding device blocks: 0/4096 done 00:13:23.134 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:23.134 00:13:23.134 Allocating group tables: 0/1 done 00:13:23.134 Writing inode tables: 0/1 done 00:13:23.134 Creating journal (1024 blocks): done 00:13:23.134 Writing superblocks and filesystem accounting information: 0/1 done 00:13:23.134 00:13:23.134 09:51:16 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:23.134 09:51:16 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:23.134 09:51:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.134 09:51:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:23.134 09:51:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:23.134 09:51:16 -- bdev/nbd_common.sh@51 -- # local i 00:13:23.134 09:51:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.134 09:51:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:23.392 09:51:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:23.392 09:51:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:23.392 09:51:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:23.392 09:51:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.392 09:51:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.392 09:51:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:23.392 09:51:17 -- bdev/nbd_common.sh@41 -- # break 00:13:23.392 09:51:17 -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.392 09:51:17 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:23.393 09:51:17 -- bdev/nbd_common.sh@147 -- # return 0 00:13:23.393 09:51:17 -- bdev/blockdev.sh@324 -- # killprocess 69142 00:13:23.393 09:51:17 -- common/autotest_common.sh@926 -- # '[' -z 69142 ']' 00:13:23.393 09:51:17 -- common/autotest_common.sh@930 -- # kill -0 69142 00:13:23.393 09:51:17 -- common/autotest_common.sh@931 -- # uname 00:13:23.393 09:51:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:23.393 09:51:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69142 00:13:23.393 killing process with pid 69142 00:13:23.393 09:51:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:23.393 09:51:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:23.393 09:51:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69142' 00:13:23.393 09:51:17 -- common/autotest_common.sh@945 -- # kill 69142 00:13:23.393 09:51:17 -- common/autotest_common.sh@950 -- # wait 69142 00:13:24.768 ************************************ 00:13:24.768 END TEST bdev_nbd 00:13:24.768 ************************************ 00:13:24.768 09:51:18 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:13:24.768 00:13:24.768 real 0m12.691s 00:13:24.768 user 0m17.988s 00:13:24.768 sys 0m4.229s 00:13:24.768 09:51:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.768 09:51:18 -- common/autotest_common.sh@10 -- # set +x 00:13:24.768 09:51:18 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:13:24.768 09:51:18 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:13:24.768 09:51:18 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:13:24.768 09:51:18 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:13:24.768 09:51:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 
00:13:24.768 09:51:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:24.768 09:51:18 -- common/autotest_common.sh@10 -- # set +x 00:13:24.768 ************************************ 00:13:24.768 START TEST bdev_fio 00:13:24.768 ************************************ 00:13:24.768 09:51:18 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:13:24.768 09:51:18 -- bdev/blockdev.sh@329 -- # local env_context 00:13:24.768 09:51:18 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:24.768 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:24.768 09:51:18 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:24.768 09:51:18 -- bdev/blockdev.sh@337 -- # echo '' 00:13:24.768 09:51:18 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:13:24.768 09:51:18 -- bdev/blockdev.sh@337 -- # env_context= 00:13:24.768 09:51:18 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:24.768 09:51:18 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:24.768 09:51:18 -- common/autotest_common.sh@1260 -- # local workload=verify 00:13:24.768 09:51:18 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:13:24.768 09:51:18 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:24.768 09:51:18 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:24.768 09:51:18 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:24.768 09:51:18 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:13:24.768 09:51:18 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:24.768 09:51:18 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:24.768 09:51:18 -- common/autotest_common.sh@1280 -- # cat 00:13:24.768 09:51:18 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:13:24.768 09:51:18 -- common/autotest_common.sh@1293 -- # cat 00:13:24.768 09:51:18 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:13:24.768 09:51:18 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:13:24.768 09:51:18 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:24.768 09:51:18 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:13:24.768 09:51:18 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:24.768 09:51:18 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:13:24.768 09:51:18 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:13:24.768 09:51:18 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:24.768 09:51:18 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:13:24.768 09:51:18 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:13:24.768 09:51:18 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:24.768 09:51:18 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:13:24.768 09:51:18 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:13:24.768 09:51:18 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:24.768 09:51:18 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:13:24.768 09:51:18 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:13:24.768 09:51:18 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:24.768 09:51:18 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:13:24.768 09:51:18 -- bdev/blockdev.sh@341 -- # echo 
filename=nvme2n1 00:13:24.768 09:51:18 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:24.768 09:51:18 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:13:24.768 09:51:18 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:13:24.768 09:51:18 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:24.768 09:51:18 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:24.768 09:51:18 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:24.769 09:51:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:24.769 09:51:18 -- common/autotest_common.sh@10 -- # set +x 00:13:24.769 ************************************ 00:13:24.769 START TEST bdev_fio_rw_verify 00:13:24.769 ************************************ 00:13:24.769 09:51:18 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:24.769 09:51:18 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:24.769 09:51:18 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:24.769 09:51:18 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:24.769 09:51:18 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:24.769 09:51:18 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:24.769 09:51:18 -- common/autotest_common.sh@1320 -- # shift 00:13:24.769 09:51:18 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:24.769 09:51:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:24.769 09:51:18 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:24.769 09:51:18 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:24.769 09:51:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:24.769 09:51:18 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:24.769 09:51:18 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:24.769 09:51:18 -- common/autotest_common.sh@1326 -- # break 00:13:24.769 09:51:18 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:24.769 09:51:18 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:24.769 
job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:24.769 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:24.769 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:24.769 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:24.769 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:24.769 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:24.769 fio-3.35 00:13:24.769 Starting 6 threads 00:13:36.969 00:13:36.969 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69569: Mon Jun 10 09:51:29 2024 00:13:36.969 read: IOPS=22.1k, BW=86.4MiB/s (90.6MB/s)(865MiB/10001msec) 00:13:36.969 slat (usec): min=3, max=1373, avg= 6.87, stdev= 5.75 00:13:36.969 clat (usec): min=135, max=19849, avg=858.14, stdev=800.23 00:13:36.969 lat (usec): min=139, max=19859, avg=865.01, stdev=800.47 00:13:36.969 clat percentiles (usec): 00:13:36.969 | 50.000th=[ 758], 99.000th=[ 4686], 99.900th=[ 8848], 99.990th=[19268], 00:13:36.969 | 99.999th=[19792] 00:13:36.969 write: IOPS=22.5k, BW=87.9MiB/s (92.1MB/s)(879MiB/10001msec); 0 zone resets 00:13:36.969 slat (usec): min=14, max=14348, avg=30.50, stdev=74.71 00:13:36.969 clat (usec): min=141, max=20107, avg=957.05, stdev=906.41 00:13:36.969 lat (usec): min=163, max=20139, avg=987.55, stdev=910.04 00:13:36.969 clat percentiles (usec): 00:13:36.969 | 50.000th=[ 840], 99.000th=[ 4817], 99.900th=[14222], 99.990th=[19792], 00:13:36.969 | 99.999th=[20055] 00:13:36.969 bw ( KiB/s): min=68608, max=108540, per=100.00%, avg=90754.42, stdev=1794.04, samples=114 00:13:36.969 iops : min=17152, max=27134, avg=22688.47, stdev=448.50, samples=114 00:13:36.969 lat (usec) : 250=1.80%, 500=15.10%, 750=26.71%, 1000=34.24% 00:13:36.969 lat (msec) : 2=18.51%, 4=2.10%, 10=1.45%, 20=0.10%, 50=0.01% 00:13:36.969 cpu : usr=59.34%, sys=27.81%, ctx=6009, majf=0, minf=21766 00:13:36.969 IO depths : 1=12.3%, 2=24.9%, 4=50.1%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:36.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.970 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.970 issued rwts: total=221333,224966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.970 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:36.970 00:13:36.970 Run status group 0 (all jobs): 00:13:36.970 READ: bw=86.4MiB/s (90.6MB/s), 86.4MiB/s-86.4MiB/s (90.6MB/s-90.6MB/s), io=865MiB (907MB), run=10001-10001msec 00:13:36.970 WRITE: bw=87.9MiB/s (92.1MB/s), 87.9MiB/s-87.9MiB/s (92.1MB/s-92.1MB/s), io=879MiB (921MB), run=10001-10001msec 00:13:36.970 ----------------------------------------------------- 00:13:36.970 Suppressions used: 00:13:36.970 count bytes template 00:13:36.970 6 48 /usr/src/fio/parse.c 00:13:36.970 3487 334752 /usr/src/fio/iolog.c 00:13:36.970 1 8 libtcmalloc_minimal.so 00:13:36.970 1 904 libcrypto.so 00:13:36.970 ----------------------------------------------------- 00:13:36.970 00:13:36.970 00:13:36.970 real 0m12.251s 00:13:36.970 user 0m37.553s 00:13:36.970 sys 0m17.033s 00:13:36.970 09:51:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.970 09:51:30 -- common/autotest_common.sh@10 -- # set +x 
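A note on the LD_PRELOAD step traced before this fio run: when the spdk_bdev ioengine plugin is built with ASan, the sanitizer runtime has to be mapped before fio's own code, so the harness resolves it from the plugin's dynamic dependencies and preloads both. A minimal sketch of that detection logic, following the ldd/grep/awk sequence visible in the trace above (the real autotest_common.sh also checks libclang_rt.asan as a fallback):

  # Resolve the ASan runtime the fio plugin was linked against.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  if [[ -n "$asan_lib" ]]; then
    # Preload the sanitizer runtime first, then the plugin itself,
    # matching the LD_PRELOAD value shown in the trace.
    export LD_PRELOAD="$asan_lib $plugin"
  fi
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 bdev.fio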
00:13:36.970 ************************************ 00:13:36.970 END TEST bdev_fio_rw_verify 00:13:36.970 ************************************ 00:13:36.970 09:51:30 -- bdev/blockdev.sh@348 -- # rm -f 00:13:36.970 09:51:30 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:36.970 09:51:30 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:36.970 09:51:30 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:36.970 09:51:30 -- common/autotest_common.sh@1260 -- # local workload=trim 00:13:36.970 09:51:30 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:13:36.970 09:51:30 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:36.970 09:51:30 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:36.970 09:51:30 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:36.970 09:51:30 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:13:36.970 09:51:30 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:36.970 09:51:30 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:36.970 09:51:30 -- common/autotest_common.sh@1280 -- # cat 00:13:36.970 09:51:30 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:13:36.970 09:51:30 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:13:36.970 09:51:30 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:13:36.970 09:51:30 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:36.970 09:51:30 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "81154bf9-6ba7-4d81-9613-391116387ff3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "81154bf9-6ba7-4d81-9613-391116387ff3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d7931ee8-3ca0-4b86-98e5-2796522d920f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d7931ee8-3ca0-4b86-98e5-2796522d920f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "8b5c9705-778d-48ae-aa52-f49fdfc61720"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8b5c9705-778d-48ae-aa52-f49fdfc61720",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "e0a19dbb-e794-4361-8fb1-f9f3b8cce261"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e0a19dbb-e794-4361-8fb1-f9f3b8cce261",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "cad0143d-cc30-4dca-ad11-0235d63dfc25"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cad0143d-cc30-4dca-ad11-0235d63dfc25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "d65e4361-296a-494a-a84c-e365982d943a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d65e4361-296a-494a-a84c-e365982d943a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:36.970 09:51:30 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:13:36.970 09:51:30 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:36.970 09:51:30 -- bdev/blockdev.sh@360 -- # popd 00:13:36.970 /home/vagrant/spdk_repo/spdk 00:13:36.970 09:51:30 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:13:36.970 09:51:30 -- bdev/blockdev.sh@362 -- # return 0 00:13:36.970 00:13:36.970 real 0m12.409s 00:13:36.970 user 0m37.644s 00:13:36.970 sys 0m17.100s 00:13:36.970 09:51:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.970 09:51:30 -- common/autotest_common.sh@10 -- # set +x 00:13:36.970 ************************************ 00:13:36.970 END TEST bdev_fio 00:13:36.970 ************************************ 00:13:36.970 09:51:30 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:36.970 09:51:30 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:36.970 09:51:30 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:36.970 09:51:30 
-- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:36.970 09:51:30 -- common/autotest_common.sh@10 -- # set +x 00:13:36.970 ************************************ 00:13:36.970 START TEST bdev_verify 00:13:36.970 ************************************ 00:13:36.970 09:51:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:37.228 [2024-06-10 09:51:30.820770] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:37.228 [2024-06-10 09:51:30.820994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69741 ] 00:13:37.228 [2024-06-10 09:51:30.993311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:37.486 [2024-06-10 09:51:31.192974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.486 [2024-06-10 09:51:31.192977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.052 Running I/O for 5 seconds... 00:13:43.342 00:13:43.342 Latency(us) 00:13:43.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.342 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.342 Verification LBA range: start 0x0 length 0x20000 00:13:43.342 nvme0n1 : 5.05 2639.86 10.31 0.00 0.00 48216.74 13405.09 81502.95 00:13:43.342 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.342 Verification LBA range: start 0x20000 length 0x20000 00:13:43.342 nvme0n1 : 5.06 2571.23 10.04 0.00 0.00 49636.46 12690.15 93895.21 00:13:43.342 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.342 Verification LBA range: start 0x0 length 0x80000 00:13:43.342 nvme1n1 : 5.05 2586.81 10.10 0.00 0.00 49180.70 13285.93 78166.57 00:13:43.342 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.342 Verification LBA range: start 0x80000 length 0x80000 00:13:43.342 nvme1n1 : 5.07 2393.52 9.35 0.00 0.00 53271.00 6494.02 94848.47 00:13:43.342 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.342 Verification LBA range: start 0x0 length 0x80000 00:13:43.342 nvme1n2 : 5.07 2737.87 10.69 0.00 0.00 46500.31 11736.90 72923.69 00:13:43.342 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.342 Verification LBA range: start 0x80000 length 0x80000 00:13:43.342 nvme1n2 : 5.07 2424.92 9.47 0.00 0.00 52528.94 15847.80 105810.85 00:13:43.342 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.342 Verification LBA range: start 0x0 length 0x80000 00:13:43.342 nvme1n3 : 5.06 2646.52 10.34 0.00 0.00 48111.99 13643.40 82932.83 00:13:43.342 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.342 Verification LBA range: start 0x80000 length 0x80000 00:13:43.342 nvme1n3 : 5.07 2392.66 9.35 0.00 0.00 53190.69 13643.40 96278.34 00:13:43.342 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.342 Verification LBA range: start 0x0 length 0xbd0bd 00:13:43.342 nvme2n1 : 5.06 3181.09 12.43 0.00 0.00 40001.97 6374.87 87222.46 00:13:43.342 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.342 Verification LBA range: start 
0xbd0bd length 0xbd0bd 00:13:43.342 nvme2n1 : 5.07 2999.06 11.72 0.00 0.00 42389.32 5093.93 55288.55 00:13:43.342 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.342 Verification LBA range: start 0x0 length 0xa0000 00:13:43.342 nvme3n1 : 5.07 2585.91 10.10 0.00 0.00 49081.55 14656.23 67680.81 00:13:43.342 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.342 Verification LBA range: start 0xa0000 length 0xa0000 00:13:43.342 nvme3n1 : 5.07 2332.75 9.11 0.00 0.00 54448.49 11975.21 92941.96 00:13:43.342 =================================================================================================================== 00:13:43.342 Total : 31492.18 123.02 0.00 0.00 48496.40 5093.93 105810.85 00:13:44.276 00:13:44.276 real 0m7.138s 00:13:44.276 user 0m9.274s 00:13:44.276 sys 0m3.258s 00:13:44.276 09:51:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.276 09:51:37 -- common/autotest_common.sh@10 -- # set +x 00:13:44.276 ************************************ 00:13:44.276 END TEST bdev_verify 00:13:44.276 ************************************ 00:13:44.276 09:51:37 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:44.276 09:51:37 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:44.276 09:51:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:44.276 09:51:37 -- common/autotest_common.sh@10 -- # set +x 00:13:44.276 ************************************ 00:13:44.276 START TEST bdev_verify_big_io 00:13:44.276 ************************************ 00:13:44.276 09:51:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:44.276 [2024-06-10 09:51:37.983029] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:44.276 [2024-06-10 09:51:37.983209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69842 ] 00:13:44.535 [2024-06-10 09:51:38.147628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:44.793 [2024-06-10 09:51:38.336787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.793 [2024-06-10 09:51:38.336787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.357 Running I/O for 5 seconds... 
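In the table that follows, each bdev appears twice because the run was launched with -m 0x3 (reactor cores 0 and 1), so bdevperf reports one verify job per core per bdev. The MiB/s column follows directly from IOPS at the 4 KiB IO size set by -o 4096: MiB/s = IOPS x 4096 / 2^20, e.g. 2639.86 x 4096 B ≈ 10.31 MiB/s for the first nvme0n1 row.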
00:13:51.943 00:13:51.943 Latency(us) 00:13:51.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.943 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.943 Verification LBA range: start 0x0 length 0x2000 00:13:51.943 nvme0n1 : 5.63 263.70 16.48 0.00 0.00 473209.42 52190.49 659649.63 00:13:51.943 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.943 Verification LBA range: start 0x2000 length 0x2000 00:13:51.943 nvme0n1 : 5.70 259.17 16.20 0.00 0.00 478788.57 51952.17 766413.73 00:13:51.943 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.943 Verification LBA range: start 0x0 length 0x8000 00:13:51.943 nvme1n1 : 5.58 280.78 17.55 0.00 0.00 436732.10 46709.29 598641.57 00:13:51.943 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.943 Verification LBA range: start 0x8000 length 0x8000 00:13:51.943 nvme1n1 : 5.62 230.89 14.43 0.00 0.00 531122.82 69587.32 690153.66 00:13:51.943 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.943 Verification LBA range: start 0x0 length 0x8000 00:13:51.943 nvme1n2 : 5.59 280.51 17.53 0.00 0.00 428370.40 45279.42 720657.69 00:13:51.943 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.943 Verification LBA range: start 0x8000 length 0x8000 00:13:51.943 nvme1n2 : 5.62 263.01 16.44 0.00 0.00 449105.82 55526.87 526194.50 00:13:51.943 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.943 Verification LBA range: start 0x0 length 0x8000 00:13:51.943 nvme1n3 : 5.59 264.47 16.53 0.00 0.00 445849.01 56003.49 526194.50 00:13:51.943 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.943 Verification LBA range: start 0x8000 length 0x8000 00:13:51.943 nvme1n3 : 5.73 257.97 16.12 0.00 0.00 445629.97 46232.67 495690.47 00:13:51.943 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.943 Verification LBA range: start 0x0 length 0xbd0b 00:13:51.943 nvme2n1 : 5.70 259.48 16.22 0.00 0.00 442630.90 54335.30 632958.60 00:13:51.943 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.943 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:51.943 nvme2n1 : 5.76 262.06 16.38 0.00 0.00 430725.62 35985.22 774039.74 00:13:51.943 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.943 Verification LBA range: start 0x0 length 0xa000 00:13:51.943 nvme3n1 : 5.71 306.15 19.13 0.00 0.00 370663.23 7566.43 522381.50 00:13:51.943 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.943 Verification LBA range: start 0xa000 length 0xa000 00:13:51.943 nvme3n1 : 5.77 303.06 18.94 0.00 0.00 365678.75 3872.58 594828.57 00:13:51.943 =================================================================================================================== 00:13:51.943 Total : 3231.24 201.95 0.00 0.00 438397.47 3872.58 774039.74 00:13:52.509 00:13:52.509 real 0m8.093s 00:13:52.509 user 0m14.267s 00:13:52.509 sys 0m0.721s 00:13:52.509 09:51:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.510 09:51:45 -- common/autotest_common.sh@10 -- # set +x 00:13:52.510 ************************************ 00:13:52.510 END TEST bdev_verify_big_io 00:13:52.510 ************************************ 00:13:52.510 09:51:46 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.510 09:51:46 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:52.510 09:51:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.510 09:51:46 -- common/autotest_common.sh@10 -- # set +x 00:13:52.510 ************************************ 00:13:52.510 START TEST bdev_write_zeroes 00:13:52.510 ************************************ 00:13:52.510 09:51:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.510 [2024-06-10 09:51:46.129894] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:52.510 [2024-06-10 09:51:46.130144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69952 ] 00:13:52.768 [2024-06-10 09:51:46.307831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.768 [2024-06-10 09:51:46.493930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.334 Running I/O for 1 seconds... 00:13:54.267 00:13:54.267 Latency(us) 00:13:54.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.267 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:54.267 nvme0n1 : 1.01 9398.49 36.71 0.00 0.00 13608.76 7357.91 30027.40 00:13:54.267 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:54.267 nvme1n1 : 1.01 9386.53 36.67 0.00 0.00 13613.87 7745.16 30027.40 00:13:54.267 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:54.267 nvme1n2 : 1.01 9374.64 36.62 0.00 0.00 13615.39 8102.63 30265.72 00:13:54.267 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:54.267 nvme1n3 : 1.01 9362.88 36.57 0.00 0.00 13622.97 8221.79 30265.72 00:13:54.267 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:54.267 nvme2n1 : 1.02 15168.41 59.25 0.00 0.00 8383.89 3753.43 20614.05 00:13:54.267 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:54.267 nvme3n1 : 1.02 9429.40 36.83 0.00 0.00 13454.29 4200.26 30146.56 00:13:54.267 =================================================================================================================== 00:13:54.267 Total : 62120.36 242.66 0.00 0.00 12308.13 3753.43 30265.72 00:13:55.654 00:13:55.654 real 0m3.103s 00:13:55.654 user 0m2.332s 00:13:55.654 sys 0m0.598s 00:13:55.654 09:51:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.654 09:51:49 -- common/autotest_common.sh@10 -- # set +x 00:13:55.654 ************************************ 00:13:55.654 END TEST bdev_write_zeroes 00:13:55.654 ************************************ 00:13:55.654 09:51:49 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:55.654 09:51:49 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:55.654 09:51:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:55.654 09:51:49 -- common/autotest_common.sh@10 -- # 
set +x 00:13:55.654 ************************************ 00:13:55.654 START TEST bdev_json_nonenclosed 00:13:55.654 ************************************ 00:13:55.654 09:51:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:55.654 [2024-06-10 09:51:49.250687] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:55.654 [2024-06-10 09:51:49.250839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70011 ] 00:13:55.945 [2024-06-10 09:51:49.412325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.945 [2024-06-10 09:51:49.636854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.945 [2024-06-10 09:51:49.637072] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:55.945 [2024-06-10 09:51:49.637124] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:56.512 00:13:56.512 real 0m0.866s 00:13:56.512 user 0m0.629s 00:13:56.512 sys 0m0.129s 00:13:56.512 09:51:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.512 09:51:50 -- common/autotest_common.sh@10 -- # set +x 00:13:56.512 ************************************ 00:13:56.512 END TEST bdev_json_nonenclosed 00:13:56.512 ************************************ 00:13:56.512 09:51:50 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:56.512 09:51:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:56.512 09:51:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:56.512 09:51:50 -- common/autotest_common.sh@10 -- # set +x 00:13:56.512 ************************************ 00:13:56.512 START TEST bdev_json_nonarray 00:13:56.512 ************************************ 00:13:56.512 09:51:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:56.512 [2024-06-10 09:51:50.166214] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:56.512 [2024-06-10 09:51:50.166363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70042 ] 00:13:56.770 [2024-06-10 09:51:50.331539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.770 [2024-06-10 09:51:50.535443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.770 [2024-06-10 09:51:50.535653] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
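The error above is the expected outcome of this negative test: nonarray.json hands spdk_subsystem_init_from_json_config a configuration whose top-level "subsystems" key is not an array, and the test passes only if the app rejects it and stops non-zero, as the warning below confirms. An illustrative shape that would trip this check (not necessarily the repo file's exact contents):

  { "subsystems": { "bdev": { } } }

whereas a valid configuration uses an array of subsystem objects, "subsystems": [ { "subsystem": "bdev", "config": [ ... ] }, ... ], as in the save_config dumps later in this log.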
00:13:56.770 [2024-06-10 09:51:50.535683] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:57.337 00:13:57.337 real 0m0.855s 00:13:57.337 user 0m0.619s 00:13:57.337 sys 0m0.129s 00:13:57.337 09:51:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.337 ************************************ 00:13:57.337 END TEST bdev_json_nonarray 00:13:57.337 09:51:50 -- common/autotest_common.sh@10 -- # set +x 00:13:57.337 ************************************ 00:13:57.337 09:51:50 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:13:57.337 09:51:50 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:13:57.337 09:51:50 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:13:57.337 09:51:50 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:57.337 09:51:50 -- bdev/blockdev.sh@809 -- # cleanup 00:13:57.337 09:51:50 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:57.337 09:51:50 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:57.337 09:51:50 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:13:57.337 09:51:50 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:13:57.337 09:51:50 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:13:57.337 09:51:50 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:13:57.337 09:51:50 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:58.270 lsblk: /dev/nvme0c0n1: not a block device 00:13:58.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:58.836 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:58.836 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:58.836 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:59.094 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:59.094 00:13:59.094 real 1m2.191s 00:13:59.094 user 1m42.286s 00:13:59.094 sys 0m30.376s 00:13:59.094 09:51:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.094 09:51:52 -- common/autotest_common.sh@10 -- # set +x 00:13:59.094 ************************************ 00:13:59.094 END TEST blockdev_xnvme 00:13:59.094 ************************************ 00:13:59.094 09:51:52 -- spdk/autotest.sh@259 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:59.094 09:51:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:59.094 09:51:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:59.094 09:51:52 -- common/autotest_common.sh@10 -- # set +x 00:13:59.094 ************************************ 00:13:59.094 START TEST ublk 00:13:59.094 ************************************ 00:13:59.094 09:51:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:59.094 * Looking for test storage... 
00:13:59.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:59.094 09:51:52 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:59.094 09:51:52 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:59.094 09:51:52 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:59.094 09:51:52 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:59.094 09:51:52 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:59.094 09:51:52 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:59.094 09:51:52 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:59.094 09:51:52 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:59.094 09:51:52 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:59.094 09:51:52 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:59.094 09:51:52 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:59.094 09:51:52 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:59.094 09:51:52 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:59.094 09:51:52 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:59.094 09:51:52 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:59.094 09:51:52 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:59.094 09:51:52 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:59.094 09:51:52 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:59.094 09:51:52 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:59.094 09:51:52 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:59.094 09:51:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:59.094 09:51:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:59.094 09:51:52 -- common/autotest_common.sh@10 -- # set +x 00:13:59.095 ************************************ 00:13:59.095 START TEST test_save_ublk_config 00:13:59.095 ************************************ 00:13:59.095 09:51:52 -- common/autotest_common.sh@1104 -- # test_save_config 00:13:59.095 09:51:52 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:59.095 09:51:52 -- ublk/ublk.sh@103 -- # tgtpid=70358 00:13:59.095 09:51:52 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:59.095 09:51:52 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:59.095 09:51:52 -- ublk/ublk.sh@106 -- # waitforlisten 70358 00:13:59.095 09:51:52 -- common/autotest_common.sh@819 -- # '[' -z 70358 ']' 00:13:59.095 09:51:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.095 09:51:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:59.095 09:51:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.095 09:51:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:59.095 09:51:52 -- common/autotest_common.sh@10 -- # set +x 00:13:59.353 [2024-06-10 09:51:52.936593] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:13:59.353 [2024-06-10 09:51:52.936751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70358 ] 00:13:59.353 [2024-06-10 09:51:53.103764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.612 [2024-06-10 09:51:53.299905] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:59.612 [2024-06-10 09:51:53.300179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.991 09:51:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:00.991 09:51:54 -- common/autotest_common.sh@852 -- # return 0 00:14:00.991 09:51:54 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:00.991 09:51:54 -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:00.991 09:51:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.991 09:51:54 -- common/autotest_common.sh@10 -- # set +x 00:14:00.991 [2024-06-10 09:51:54.651273] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:00.991 malloc0 00:14:00.991 [2024-06-10 09:51:54.728349] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:00.991 [2024-06-10 09:51:54.728462] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:00.991 [2024-06-10 09:51:54.728476] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:00.991 [2024-06-10 09:51:54.728489] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:00.991 [2024-06-10 09:51:54.737299] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:00.991 [2024-06-10 09:51:54.737339] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:00.991 [2024-06-10 09:51:54.744138] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:00.991 [2024-06-10 09:51:54.744289] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:01.252 [2024-06-10 09:51:54.761626] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:01.252 0 00:14:01.252 09:51:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:01.252 09:51:54 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:01.252 09:51:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:01.252 09:51:54 -- common/autotest_common.sh@10 -- # set +x 00:14:01.252 09:51:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:01.252 09:51:55 -- ublk/ublk.sh@115 -- # config='{ 00:14:01.252 "subsystems": [ 00:14:01.252 { 00:14:01.252 "subsystem": "iobuf", 00:14:01.252 "config": [ 00:14:01.252 { 00:14:01.252 "method": "iobuf_set_options", 00:14:01.252 "params": { 00:14:01.252 "small_pool_count": 8192, 00:14:01.252 "large_pool_count": 1024, 00:14:01.252 "small_bufsize": 8192, 00:14:01.252 "large_bufsize": 135168 00:14:01.252 } 00:14:01.252 } 00:14:01.252 ] 00:14:01.252 }, 00:14:01.252 { 00:14:01.252 "subsystem": "sock", 00:14:01.252 "config": [ 00:14:01.252 { 00:14:01.252 "method": "sock_impl_set_options", 00:14:01.252 "params": { 00:14:01.252 "impl_name": "posix", 00:14:01.252 "recv_buf_size": 2097152, 00:14:01.252 "send_buf_size": 2097152, 00:14:01.252 "enable_recv_pipe": true, 00:14:01.252 "enable_quickack": false, 00:14:01.252 "enable_placement_id": 0, 00:14:01.252 
"enable_zerocopy_send_server": true, 00:14:01.252 "enable_zerocopy_send_client": false, 00:14:01.252 "zerocopy_threshold": 0, 00:14:01.252 "tls_version": 0, 00:14:01.252 "enable_ktls": false 00:14:01.252 } 00:14:01.252 }, 00:14:01.252 { 00:14:01.252 "method": "sock_impl_set_options", 00:14:01.252 "params": { 00:14:01.252 "impl_name": "ssl", 00:14:01.252 "recv_buf_size": 4096, 00:14:01.252 "send_buf_size": 4096, 00:14:01.252 "enable_recv_pipe": true, 00:14:01.252 "enable_quickack": false, 00:14:01.252 "enable_placement_id": 0, 00:14:01.252 "enable_zerocopy_send_server": true, 00:14:01.252 "enable_zerocopy_send_client": false, 00:14:01.252 "zerocopy_threshold": 0, 00:14:01.252 "tls_version": 0, 00:14:01.252 "enable_ktls": false 00:14:01.252 } 00:14:01.252 } 00:14:01.252 ] 00:14:01.252 }, 00:14:01.252 { 00:14:01.252 "subsystem": "vmd", 00:14:01.252 "config": [] 00:14:01.252 }, 00:14:01.252 { 00:14:01.252 "subsystem": "accel", 00:14:01.252 "config": [ 00:14:01.252 { 00:14:01.252 "method": "accel_set_options", 00:14:01.252 "params": { 00:14:01.252 "small_cache_size": 128, 00:14:01.252 "large_cache_size": 16, 00:14:01.252 "task_count": 2048, 00:14:01.252 "sequence_count": 2048, 00:14:01.252 "buf_count": 2048 00:14:01.252 } 00:14:01.252 } 00:14:01.252 ] 00:14:01.252 }, 00:14:01.252 { 00:14:01.252 "subsystem": "bdev", 00:14:01.252 "config": [ 00:14:01.252 { 00:14:01.252 "method": "bdev_set_options", 00:14:01.252 "params": { 00:14:01.252 "bdev_io_pool_size": 65535, 00:14:01.252 "bdev_io_cache_size": 256, 00:14:01.252 "bdev_auto_examine": true, 00:14:01.252 "iobuf_small_cache_size": 128, 00:14:01.252 "iobuf_large_cache_size": 16 00:14:01.252 } 00:14:01.252 }, 00:14:01.252 { 00:14:01.252 "method": "bdev_raid_set_options", 00:14:01.252 "params": { 00:14:01.252 "process_window_size_kb": 1024 00:14:01.252 } 00:14:01.252 }, 00:14:01.252 { 00:14:01.252 "method": "bdev_iscsi_set_options", 00:14:01.252 "params": { 00:14:01.252 "timeout_sec": 30 00:14:01.252 } 00:14:01.252 }, 00:14:01.252 { 00:14:01.252 "method": "bdev_nvme_set_options", 00:14:01.252 "params": { 00:14:01.252 "action_on_timeout": "none", 00:14:01.252 "timeout_us": 0, 00:14:01.252 "timeout_admin_us": 0, 00:14:01.252 "keep_alive_timeout_ms": 10000, 00:14:01.252 "transport_retry_count": 4, 00:14:01.252 "arbitration_burst": 0, 00:14:01.252 "low_priority_weight": 0, 00:14:01.252 "medium_priority_weight": 0, 00:14:01.252 "high_priority_weight": 0, 00:14:01.252 "nvme_adminq_poll_period_us": 10000, 00:14:01.252 "nvme_ioq_poll_period_us": 0, 00:14:01.252 "io_queue_requests": 0, 00:14:01.252 "delay_cmd_submit": true, 00:14:01.252 "bdev_retry_count": 3, 00:14:01.252 "transport_ack_timeout": 0, 00:14:01.252 "ctrlr_loss_timeout_sec": 0, 00:14:01.252 "reconnect_delay_sec": 0, 00:14:01.252 "fast_io_fail_timeout_sec": 0, 00:14:01.252 "generate_uuids": false, 00:14:01.252 "transport_tos": 0, 00:14:01.252 "io_path_stat": false, 00:14:01.252 "allow_accel_sequence": false 00:14:01.252 } 00:14:01.252 }, 00:14:01.252 { 00:14:01.252 "method": "bdev_nvme_set_hotplug", 00:14:01.252 "params": { 00:14:01.252 "period_us": 100000, 00:14:01.252 "enable": false 00:14:01.252 } 00:14:01.252 }, 00:14:01.252 { 00:14:01.252 "method": "bdev_malloc_create", 00:14:01.252 "params": { 00:14:01.252 "name": "malloc0", 00:14:01.252 "num_blocks": 8192, 00:14:01.252 "block_size": 4096, 00:14:01.252 "physical_block_size": 4096, 00:14:01.253 "uuid": "25098c3d-b4c2-499d-a215-19fdb008e4b2", 00:14:01.253 "optimal_io_boundary": 0 00:14:01.253 } 00:14:01.253 }, 00:14:01.253 { 00:14:01.253 
"method": "bdev_wait_for_examine" 00:14:01.253 } 00:14:01.253 ] 00:14:01.253 }, 00:14:01.253 { 00:14:01.253 "subsystem": "scsi", 00:14:01.253 "config": null 00:14:01.253 }, 00:14:01.253 { 00:14:01.253 "subsystem": "scheduler", 00:14:01.253 "config": [ 00:14:01.253 { 00:14:01.253 "method": "framework_set_scheduler", 00:14:01.253 "params": { 00:14:01.253 "name": "static" 00:14:01.253 } 00:14:01.253 } 00:14:01.253 ] 00:14:01.253 }, 00:14:01.253 { 00:14:01.253 "subsystem": "vhost_scsi", 00:14:01.253 "config": [] 00:14:01.253 }, 00:14:01.253 { 00:14:01.253 "subsystem": "vhost_blk", 00:14:01.253 "config": [] 00:14:01.253 }, 00:14:01.253 { 00:14:01.253 "subsystem": "ublk", 00:14:01.253 "config": [ 00:14:01.253 { 00:14:01.253 "method": "ublk_create_target", 00:14:01.253 "params": { 00:14:01.253 "cpumask": "1" 00:14:01.253 } 00:14:01.253 }, 00:14:01.253 { 00:14:01.253 "method": "ublk_start_disk", 00:14:01.253 "params": { 00:14:01.253 "bdev_name": "malloc0", 00:14:01.253 "ublk_id": 0, 00:14:01.253 "num_queues": 1, 00:14:01.253 "queue_depth": 128 00:14:01.253 } 00:14:01.253 } 00:14:01.253 ] 00:14:01.253 }, 00:14:01.253 { 00:14:01.253 "subsystem": "nbd", 00:14:01.253 "config": [] 00:14:01.253 }, 00:14:01.253 { 00:14:01.253 "subsystem": "nvmf", 00:14:01.253 "config": [ 00:14:01.253 { 00:14:01.253 "method": "nvmf_set_config", 00:14:01.253 "params": { 00:14:01.253 "discovery_filter": "match_any", 00:14:01.253 "admin_cmd_passthru": { 00:14:01.253 "identify_ctrlr": false 00:14:01.253 } 00:14:01.253 } 00:14:01.253 }, 00:14:01.253 { 00:14:01.253 "method": "nvmf_set_max_subsystems", 00:14:01.253 "params": { 00:14:01.253 "max_subsystems": 1024 00:14:01.253 } 00:14:01.253 }, 00:14:01.253 { 00:14:01.253 "method": "nvmf_set_crdt", 00:14:01.253 "params": { 00:14:01.253 "crdt1": 0, 00:14:01.253 "crdt2": 0, 00:14:01.253 "crdt3": 0 00:14:01.253 } 00:14:01.253 } 00:14:01.253 ] 00:14:01.253 }, 00:14:01.253 { 00:14:01.253 "subsystem": "iscsi", 00:14:01.253 "config": [ 00:14:01.253 { 00:14:01.253 "method": "iscsi_set_options", 00:14:01.253 "params": { 00:14:01.253 "node_base": "iqn.2016-06.io.spdk", 00:14:01.253 "max_sessions": 128, 00:14:01.253 "max_connections_per_session": 2, 00:14:01.253 "max_queue_depth": 64, 00:14:01.253 "default_time2wait": 2, 00:14:01.253 "default_time2retain": 20, 00:14:01.253 "first_burst_length": 8192, 00:14:01.253 "immediate_data": true, 00:14:01.253 "allow_duplicated_isid": false, 00:14:01.253 "error_recovery_level": 0, 00:14:01.253 "nop_timeout": 60, 00:14:01.253 "nop_in_interval": 30, 00:14:01.253 "disable_chap": false, 00:14:01.253 "require_chap": false, 00:14:01.253 "mutual_chap": false, 00:14:01.253 "chap_group": 0, 00:14:01.253 "max_large_datain_per_connection": 64, 00:14:01.253 "max_r2t_per_connection": 4, 00:14:01.253 "pdu_pool_size": 36864, 00:14:01.253 "immediate_data_pool_size": 16384, 00:14:01.253 "data_out_pool_size": 2048 00:14:01.253 } 00:14:01.253 } 00:14:01.253 ] 00:14:01.253 } 00:14:01.253 ] 00:14:01.253 }' 00:14:01.253 09:51:55 -- ublk/ublk.sh@116 -- # killprocess 70358 00:14:01.253 09:51:55 -- common/autotest_common.sh@926 -- # '[' -z 70358 ']' 00:14:01.253 09:51:55 -- common/autotest_common.sh@930 -- # kill -0 70358 00:14:01.253 09:51:55 -- common/autotest_common.sh@931 -- # uname 00:14:01.253 09:51:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:01.253 09:51:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70358 00:14:01.513 09:51:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:01.513 killing process with pid 
70358 00:14:01.513 09:51:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:01.513 09:51:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70358' 00:14:01.513 09:51:55 -- common/autotest_common.sh@945 -- # kill 70358 00:14:01.513 09:51:55 -- common/autotest_common.sh@950 -- # wait 70358 00:14:03.415 [2024-06-10 09:51:56.776637] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:03.415 [2024-06-10 09:51:56.813228] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:03.415 [2024-06-10 09:51:56.813451] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:03.415 [2024-06-10 09:51:56.817333] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:03.415 [2024-06-10 09:51:56.817398] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:03.415 [2024-06-10 09:51:56.817411] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:03.415 [2024-06-10 09:51:56.817448] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:03.415 [2024-06-10 09:51:56.821308] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:04.352 09:51:58 -- ublk/ublk.sh@119 -- # tgtpid=70429 00:14:04.352 09:51:58 -- ublk/ublk.sh@121 -- # waitforlisten 70429 00:14:04.352 09:51:58 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:04.352 09:51:58 -- common/autotest_common.sh@819 -- # '[' -z 70429 ']' 00:14:04.352 09:51:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.352 09:51:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:04.352 09:51:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
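The -c /dev/fd/63 argument on the spdk_tgt command line above is bash process substitution: the configuration captured earlier with save_config is echoed back into the fresh target through an anonymous file descriptor instead of a temp file, and the large JSON echo traced just below is the producer side of that pipe. A minimal sketch of the save-and-restore round trip (variable names illustrative; the test wraps the RPC call in its rpc_cmd helper):

  # Capture the running target's full configuration as JSON.
  config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)
  kill "$tgtpid" && wait "$tgtpid"
  # Restart, feeding the saved JSON back in; <(...) expands to /dev/fd/63.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c <(echo "$config") &
  tgtpid=$!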
00:14:04.352 09:51:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:04.352 09:51:58 -- ublk/ublk.sh@118 -- # echo '{ 00:14:04.352 "subsystems": [ 00:14:04.352 { 00:14:04.352 "subsystem": "iobuf", 00:14:04.352 "config": [ 00:14:04.352 { 00:14:04.352 "method": "iobuf_set_options", 00:14:04.352 "params": { 00:14:04.352 "small_pool_count": 8192, 00:14:04.352 "large_pool_count": 1024, 00:14:04.352 "small_bufsize": 8192, 00:14:04.352 "large_bufsize": 135168 00:14:04.352 } 00:14:04.352 } 00:14:04.352 ] 00:14:04.352 }, 00:14:04.352 { 00:14:04.352 "subsystem": "sock", 00:14:04.352 "config": [ 00:14:04.352 { 00:14:04.352 "method": "sock_impl_set_options", 00:14:04.352 "params": { 00:14:04.352 "impl_name": "posix", 00:14:04.352 "recv_buf_size": 2097152, 00:14:04.352 "send_buf_size": 2097152, 00:14:04.352 "enable_recv_pipe": true, 00:14:04.352 "enable_quickack": false, 00:14:04.352 "enable_placement_id": 0, 00:14:04.352 "enable_zerocopy_send_server": true, 00:14:04.352 "enable_zerocopy_send_client": false, 00:14:04.352 "zerocopy_threshold": 0, 00:14:04.352 "tls_version": 0, 00:14:04.352 "enable_ktls": false 00:14:04.352 } 00:14:04.352 }, 00:14:04.352 { 00:14:04.352 "method": "sock_impl_set_options", 00:14:04.352 "params": { 00:14:04.352 "impl_name": "ssl", 00:14:04.352 "recv_buf_size": 4096, 00:14:04.352 "send_buf_size": 4096, 00:14:04.352 "enable_recv_pipe": true, 00:14:04.352 "enable_quickack": false, 00:14:04.352 "enable_placement_id": 0, 00:14:04.352 "enable_zerocopy_send_server": true, 00:14:04.352 "enable_zerocopy_send_client": false, 00:14:04.352 "zerocopy_threshold": 0, 00:14:04.352 "tls_version": 0, 00:14:04.352 "enable_ktls": false 00:14:04.352 } 00:14:04.352 } 00:14:04.352 ] 00:14:04.352 }, 00:14:04.352 { 00:14:04.352 "subsystem": "vmd", 00:14:04.352 "config": [] 00:14:04.352 }, 00:14:04.352 { 00:14:04.352 "subsystem": "accel", 00:14:04.352 "config": [ 00:14:04.352 { 00:14:04.352 "method": "accel_set_options", 00:14:04.352 "params": { 00:14:04.352 "small_cache_size": 128, 00:14:04.352 "large_cache_size": 16, 00:14:04.352 "task_count": 2048, 00:14:04.352 "sequence_count": 2048, 00:14:04.352 "buf_count": 2048 00:14:04.352 } 00:14:04.352 } 00:14:04.352 ] 00:14:04.352 }, 00:14:04.352 { 00:14:04.352 "subsystem": "bdev", 00:14:04.352 "config": [ 00:14:04.352 { 00:14:04.352 "method": "bdev_set_options", 00:14:04.352 "params": { 00:14:04.352 "bdev_io_pool_size": 65535, 00:14:04.352 "bdev_io_cache_size": 256, 00:14:04.352 "bdev_auto_examine": true, 00:14:04.352 "iobuf_small_cache_size": 128, 00:14:04.352 "iobuf_large_cache_size": 16 00:14:04.352 } 00:14:04.352 }, 00:14:04.352 { 00:14:04.352 "method": "bdev_raid_set_options", 00:14:04.352 "params": { 00:14:04.352 "process_window_size_kb": 1024 00:14:04.352 } 00:14:04.352 }, 00:14:04.352 { 00:14:04.352 "method": "bdev_iscsi_set_options", 00:14:04.352 "params": { 00:14:04.352 "timeout_sec": 30 00:14:04.352 } 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "method": "bdev_nvme_set_options", 00:14:04.353 "params": { 00:14:04.353 "action_on_timeout": "none", 00:14:04.353 "timeout_us": 0, 00:14:04.353 "timeout_admin_us": 0, 00:14:04.353 "keep_alive_timeout_ms": 10000, 00:14:04.353 "transport_retry_count": 4, 00:14:04.353 "arbitration_burst": 0, 00:14:04.353 "low_priority_weight": 0, 00:14:04.353 "medium_priority_weight": 0, 00:14:04.353 "high_priority_weight": 0, 00:14:04.353 "nvme_adminq_poll_period_us": 10000, 00:14:04.353 "nvme_ioq_poll_period_us": 0, 00:14:04.353 "io_queue_requests": 0, 00:14:04.353 "delay_cmd_submit": true, 00:14:04.353 
"bdev_retry_count": 3, 00:14:04.353 "transport_ack_timeout": 0, 00:14:04.353 "ctrlr_loss_timeout_sec": 0, 00:14:04.353 "reconnect_delay_sec": 0, 00:14:04.353 "fast_io_fail_timeout_sec": 0, 00:14:04.353 "generate_uuids": false, 00:14:04.353 "transport_tos": 0, 00:14:04.353 "io_path_stat": false, 00:14:04.353 "allow_accel_sequence": false 00:14:04.353 } 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "method": "bdev_nvme_set_hotplug", 00:14:04.353 "params": { 00:14:04.353 "period_us": 100000, 00:14:04.353 "enable": false 00:14:04.353 } 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "method": "bdev_malloc_create", 00:14:04.353 "params": { 00:14:04.353 "name": "malloc0", 00:14:04.353 "num_blocks": 8192, 00:14:04.353 "block_size": 4096, 00:14:04.353 "physical_block_size": 4096, 00:14:04.353 "uuid": "25098c3d-b4c2-499d-a215-19fdb008e4b2", 00:14:04.353 "optimal_io_boundary": 0 00:14:04.353 } 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "method": "bdev_wait_for_examine" 00:14:04.353 } 00:14:04.353 ] 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "subsystem": "scsi", 00:14:04.353 "config": null 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "subsystem": "scheduler", 00:14:04.353 "config": [ 00:14:04.353 { 00:14:04.353 "method": "framework_set_scheduler", 00:14:04.353 "params": { 00:14:04.353 "name": "static" 00:14:04.353 } 00:14:04.353 } 00:14:04.353 ] 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "subsystem": "vhost_scsi", 00:14:04.353 "config": [] 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "subsystem": "vhost_blk", 00:14:04.353 "config": [] 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "subsystem": "ublk", 00:14:04.353 "config": [ 00:14:04.353 { 00:14:04.353 "method": "ublk_create_target", 00:14:04.353 "params": { 00:14:04.353 "cpumask": "1" 00:14:04.353 } 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "method": "ublk_start_disk", 00:14:04.353 "params": { 00:14:04.353 "bdev_name": "malloc0", 00:14:04.353 "ublk_id": 0, 00:14:04.353 "num_queues": 1, 00:14:04.353 "queue_depth": 128 00:14:04.353 } 00:14:04.353 } 00:14:04.353 ] 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "subsystem": "nbd", 00:14:04.353 "config": [] 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "subsystem": "nvmf", 00:14:04.353 "config": [ 00:14:04.353 { 00:14:04.353 "method": "nvmf_set_config", 00:14:04.353 "params": { 00:14:04.353 "discovery_filter": "match_any", 00:14:04.353 "admin_cmd_passthru": { 00:14:04.353 "identify_ctrlr": false 00:14:04.353 } 00:14:04.353 } 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "method": "nvmf_set_max_subsystems", 00:14:04.353 "params": { 00:14:04.353 "max_subsystems": 1024 00:14:04.353 } 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "method": "nvmf_set_crdt", 00:14:04.353 "params": { 00:14:04.353 "crdt1": 0, 00:14:04.353 "crdt2": 0, 00:14:04.353 "crdt3": 0 00:14:04.353 } 00:14:04.353 } 00:14:04.353 ] 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "subsystem": "iscsi", 00:14:04.353 "config": [ 00:14:04.353 { 00:14:04.353 "method": "iscsi_set_options", 00:14:04.353 "params": { 00:14:04.353 "node_base": "iqn.2016-06.io.spdk", 00:14:04.353 "max_sessions": 128, 00:14:04.353 "max_connections_per_session": 2, 00:14:04.353 "max_queue_depth": 64, 00:14:04.353 "default_time2wait": 2, 00:14:04.353 "default_time2retain": 20, 00:14:04.353 "first_burst_length": 8192, 00:14:04.353 "immediate_data": true, 00:14:04.353 "allow_duplicated_isid": false, 00:14:04.353 "error_recovery_level": 0, 00:14:04.353 "nop_timeout": 60, 00:14:04.353 "nop_in_interval": 30, 00:14:04.353 "disable_chap": false, 00:14:04.353 "require_chap": false, 00:14:04.353 
"mutual_chap": false, 00:14:04.353 "chap_group": 0, 00:14:04.353 "max_large_datain_per_connection": 64, 00:14:04.353 "max_r2t_per_connection": 4, 00:14:04.353 "pdu_pool_size": 36864, 00:14:04.353 "immediate_data_pool_size": 16384, 00:14:04.353 "data_out_pool_size": 2048 00:14:04.353 } 00:14:04.353 } 00:14:04.353 ] 00:14:04.353 } 00:14:04.353 ] 00:14:04.353 }' 00:14:04.353 09:51:58 -- common/autotest_common.sh@10 -- # set +x 00:14:04.612 [2024-06-10 09:51:58.151218] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:04.612 [2024-06-10 09:51:58.151384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70429 ] 00:14:04.612 [2024-06-10 09:51:58.324282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.873 [2024-06-10 09:51:58.526792] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:04.874 [2024-06-10 09:51:58.527033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.814 [2024-06-10 09:51:59.395268] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:05.814 [2024-06-10 09:51:59.402305] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:05.814 [2024-06-10 09:51:59.402404] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:05.814 [2024-06-10 09:51:59.402419] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:05.814 [2024-06-10 09:51:59.402429] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.814 [2024-06-10 09:51:59.410293] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.814 [2024-06-10 09:51:59.410320] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.814 [2024-06-10 09:51:59.417161] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.814 [2024-06-10 09:51:59.417301] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:05.814 [2024-06-10 09:51:59.434133] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:06.382 09:51:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:06.382 09:51:59 -- common/autotest_common.sh@852 -- # return 0 00:14:06.382 09:51:59 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:06.382 09:51:59 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:06.382 09:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.382 09:51:59 -- common/autotest_common.sh@10 -- # set +x 00:14:06.382 09:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.382 09:51:59 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:06.382 09:51:59 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:06.382 09:51:59 -- ublk/ublk.sh@125 -- # killprocess 70429 00:14:06.382 09:51:59 -- common/autotest_common.sh@926 -- # '[' -z 70429 ']' 00:14:06.382 09:51:59 -- common/autotest_common.sh@930 -- # kill -0 70429 00:14:06.382 09:51:59 -- common/autotest_common.sh@931 -- # uname 00:14:06.382 09:51:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:06.382 09:51:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70429 00:14:06.382 09:51:59 
-- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:06.382 killing process with pid 70429 00:14:06.382 09:51:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:06.382 09:51:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70429' 00:14:06.382 09:51:59 -- common/autotest_common.sh@945 -- # kill 70429 00:14:06.382 09:51:59 -- common/autotest_common.sh@950 -- # wait 70429 00:14:07.757 [2024-06-10 09:52:01.169786] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:07.757 [2024-06-10 09:52:01.205164] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:07.757 [2024-06-10 09:52:01.209147] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:07.757 [2024-06-10 09:52:01.219184] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:07.757 [2024-06-10 09:52:01.219292] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:07.757 [2024-06-10 09:52:01.219307] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:07.757 [2024-06-10 09:52:01.219356] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:07.757 [2024-06-10 09:52:01.219580] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:09.131 09:52:02 -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:09.131 00:14:09.131 real 0m9.637s 00:14:09.131 user 0m8.621s 00:14:09.131 sys 0m2.443s 00:14:09.131 09:52:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:09.131 09:52:02 -- common/autotest_common.sh@10 -- # set +x 00:14:09.131 ************************************ 00:14:09.131 END TEST test_save_ublk_config 00:14:09.131 ************************************ 00:14:09.131 09:52:02 -- ublk/ublk.sh@139 -- # spdk_pid=70509 00:14:09.131 09:52:02 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:09.131 09:52:02 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.131 09:52:02 -- ublk/ublk.sh@141 -- # waitforlisten 70509 00:14:09.131 09:52:02 -- common/autotest_common.sh@819 -- # '[' -z 70509 ']' 00:14:09.131 09:52:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.131 09:52:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:09.131 09:52:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.131 09:52:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:09.131 09:52:02 -- common/autotest_common.sh@10 -- # set +x 00:14:09.131 [2024-06-10 09:52:02.632475] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
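
waitforlisten above blocks until the new target (pid 70509) answers on its UNIX domain RPC socket. A rough sketch of the idea, assuming the default socket path and an RPC that every SPDK target serves; the real helper in autotest_common.sh additionally enforces a retry limit:

    # Poll the RPC socket until the target is ready to accept commands.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_pid" || exit 1   # bail out if the target already died
        sleep 0.5
    done
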
00:14:09.131 [2024-06-10 09:52:02.632606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70509 ] 00:14:09.131 [2024-06-10 09:52:02.792000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:09.389 [2024-06-10 09:52:02.976691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:09.390 [2024-06-10 09:52:02.977126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.390 [2024-06-10 09:52:02.977150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.767 09:52:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:10.767 09:52:04 -- common/autotest_common.sh@852 -- # return 0 00:14:10.767 09:52:04 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:10.767 09:52:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:10.767 09:52:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:10.767 09:52:04 -- common/autotest_common.sh@10 -- # set +x 00:14:10.767 ************************************ 00:14:10.767 START TEST test_create_ublk 00:14:10.767 ************************************ 00:14:10.767 09:52:04 -- common/autotest_common.sh@1104 -- # test_create_ublk 00:14:10.767 09:52:04 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:10.767 09:52:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.767 09:52:04 -- common/autotest_common.sh@10 -- # set +x 00:14:10.767 [2024-06-10 09:52:04.295491] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:10.767 09:52:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.767 09:52:04 -- ublk/ublk.sh@33 -- # ublk_target= 00:14:10.767 09:52:04 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:10.767 09:52:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.767 09:52:04 -- common/autotest_common.sh@10 -- # set +x 00:14:10.767 09:52:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.767 09:52:04 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:10.767 09:52:04 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:10.767 09:52:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.767 09:52:04 -- common/autotest_common.sh@10 -- # set +x 00:14:11.026 [2024-06-10 09:52:04.541301] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:11.026 [2024-06-10 09:52:04.541779] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:11.026 [2024-06-10 09:52:04.541806] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:11.026 [2024-06-10 09:52:04.541820] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:11.026 [2024-06-10 09:52:04.550394] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:11.026 [2024-06-10 09:52:04.550451] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:11.026 [2024-06-10 09:52:04.557163] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:11.026 [2024-06-10 09:52:04.576423] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:11.026 [2024-06-10 09:52:04.592150] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:14:11.026 09:52:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.026 09:52:04 -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:11.026 09:52:04 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:11.026 09:52:04 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:11.026 09:52:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.026 09:52:04 -- common/autotest_common.sh@10 -- # set +x 00:14:11.026 09:52:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.026 09:52:04 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:11.026 { 00:14:11.026 "ublk_device": "/dev/ublkb0", 00:14:11.026 "id": 0, 00:14:11.026 "queue_depth": 512, 00:14:11.026 "num_queues": 4, 00:14:11.026 "bdev_name": "Malloc0" 00:14:11.026 } 00:14:11.026 ]' 00:14:11.026 09:52:04 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:11.026 09:52:04 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:11.026 09:52:04 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:11.026 09:52:04 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:11.026 09:52:04 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:11.026 09:52:04 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:11.026 09:52:04 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:11.298 09:52:04 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:11.298 09:52:04 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:11.298 09:52:04 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:11.298 09:52:04 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:11.298 09:52:04 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:11.298 09:52:04 -- lvol/common.sh@41 -- # local offset=0 00:14:11.298 09:52:04 -- lvol/common.sh@42 -- # local size=134217728 00:14:11.298 09:52:04 -- lvol/common.sh@43 -- # local rw=write 00:14:11.298 09:52:04 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:11.298 09:52:04 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:11.298 09:52:04 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:11.298 09:52:04 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:11.298 09:52:04 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:11.298 09:52:04 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:11.298 09:52:04 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:11.298 fio: verification read phase will never start because write phase uses all of runtime 00:14:11.298 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:11.298 fio-3.35 00:14:11.298 Starting 1 process 00:14:23.573 00:14:23.573 fio_test: (groupid=0, jobs=1): err= 0: pid=70573: Mon Jun 10 09:52:15 2024 00:14:23.573 write: IOPS=10.3k, BW=40.4MiB/s (42.4MB/s)(404MiB/10000msec); 0 zone resets 00:14:23.573 clat (usec): min=53, max=9769, avg=95.11, stdev=159.19 00:14:23.573 lat (usec): min=54, max=9789, avg=95.89, stdev=159.22 00:14:23.573 clat percentiles (usec): 00:14:23.573 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 80], 00:14:23.573 | 
30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 83], 60.00th=[ 84], 00:14:23.573 | 70.00th=[ 86], 80.00th=[ 92], 90.00th=[ 99], 95.00th=[ 111], 00:14:23.573 | 99.00th=[ 133], 99.50th=[ 157], 99.90th=[ 3261], 99.95th=[ 3556], 00:14:23.573 | 99.99th=[ 3720] 00:14:23.573 bw ( KiB/s): min=17656, max=44080, per=99.96%, avg=41386.11, stdev=5898.63, samples=19 00:14:23.573 iops : min= 4414, max=11020, avg=10346.47, stdev=1474.66, samples=19 00:14:23.573 lat (usec) : 100=90.97%, 250=8.57%, 500=0.02%, 750=0.03%, 1000=0.05% 00:14:23.573 lat (msec) : 2=0.11%, 4=0.25%, 10=0.01% 00:14:23.573 cpu : usr=3.01%, sys=7.57%, ctx=103514, majf=0, minf=796 00:14:23.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:23.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.573 issued rwts: total=0,103506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:23.573 00:14:23.573 Run status group 0 (all jobs): 00:14:23.573 WRITE: bw=40.4MiB/s (42.4MB/s), 40.4MiB/s-40.4MiB/s (42.4MB/s-42.4MB/s), io=404MiB (424MB), run=10000-10000msec 00:14:23.573 00:14:23.573 Disk stats (read/write): 00:14:23.573 ublkb0: ios=0/102446, merge=0/0, ticks=0/8903, in_queue=8903, util=99.08% 00:14:23.573 09:52:15 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:23.573 09:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.573 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:14:23.573 [2024-06-10 09:52:15.131960] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:23.573 [2024-06-10 09:52:15.161623] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:23.573 [2024-06-10 09:52:15.166332] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:23.573 [2024-06-10 09:52:15.174567] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:23.573 [2024-06-10 09:52:15.174933] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:23.573 [2024-06-10 09:52:15.174956] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:23.573 09:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.573 09:52:15 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:23.573 09:52:15 -- common/autotest_common.sh@640 -- # local es=0 00:14:23.573 09:52:15 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:23.573 09:52:15 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:14:23.573 09:52:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:23.573 09:52:15 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:14:23.573 09:52:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:23.573 09:52:15 -- common/autotest_common.sh@643 -- # rpc_cmd ublk_stop_disk 0 00:14:23.573 09:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.573 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:14:23.573 [2024-06-10 09:52:15.187299] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:23.573 request: 00:14:23.573 { 00:14:23.573 "ublk_id": 0, 00:14:23.573 "method": "ublk_stop_disk", 00:14:23.573 "req_id": 1 00:14:23.573 } 00:14:23.573 Got JSON-RPC error response 00:14:23.573 response: 00:14:23.573 { 00:14:23.573 "code": -19, 00:14:23.573 "message": "No such device" 
00:14:23.573 } 00:14:23.573 09:52:15 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:14:23.573 09:52:15 -- common/autotest_common.sh@643 -- # es=1 00:14:23.573 09:52:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:23.573 09:52:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:23.573 09:52:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:23.573 09:52:15 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:23.573 09:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.573 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:14:23.573 [2024-06-10 09:52:15.200238] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:23.573 [2024-06-10 09:52:15.208134] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:23.573 [2024-06-10 09:52:15.208188] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:23.573 09:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.573 09:52:15 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:23.573 09:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.573 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:14:23.573 09:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.573 09:52:15 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:23.573 09:52:15 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:23.573 09:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.573 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:14:23.573 09:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.573 09:52:15 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:23.573 09:52:15 -- lvol/common.sh@26 -- # jq length 00:14:23.573 09:52:15 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:23.573 09:52:15 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:23.573 09:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.573 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:14:23.573 09:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.573 09:52:15 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:23.573 09:52:15 -- lvol/common.sh@28 -- # jq length 00:14:23.573 ************************************ 00:14:23.573 END TEST test_create_ublk 00:14:23.573 ************************************ 00:14:23.573 09:52:15 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:23.573 00:14:23.573 real 0m11.350s 00:14:23.573 user 0m0.752s 00:14:23.573 sys 0m0.861s 00:14:23.573 09:52:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.573 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:14:23.573 09:52:15 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:23.573 09:52:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:23.573 09:52:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:23.573 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:14:23.573 ************************************ 00:14:23.573 START TEST test_create_multi_ublk 00:14:23.573 ************************************ 00:14:23.573 09:52:15 -- common/autotest_common.sh@1104 -- # test_create_multi_ublk 00:14:23.573 09:52:15 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:23.573 09:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.573 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:14:23.573 [2024-06-10 09:52:15.695392] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:23.573 
09:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.573 09:52:15 -- ublk/ublk.sh@62 -- # ublk_target= 00:14:23.573 09:52:15 -- ublk/ublk.sh@64 -- # seq 0 3 00:14:23.573 09:52:15 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.573 09:52:15 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:23.573 09:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.573 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:14:23.573 09:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.573 09:52:15 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:23.573 09:52:15 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:23.573 09:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.574 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:14:23.574 [2024-06-10 09:52:15.933344] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:23.574 [2024-06-10 09:52:15.933852] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:23.574 [2024-06-10 09:52:15.933875] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:23.574 [2024-06-10 09:52:15.933889] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:23.574 [2024-06-10 09:52:15.942351] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:23.574 [2024-06-10 09:52:15.942406] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:23.574 [2024-06-10 09:52:15.949181] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:23.574 [2024-06-10 09:52:15.949998] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:23.574 [2024-06-10 09:52:15.965282] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:23.574 09:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.574 09:52:15 -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:23.574 09:52:15 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.574 09:52:15 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:23.574 09:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.574 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:14:23.574 09:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.574 09:52:16 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:23.574 09:52:16 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:23.574 09:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.574 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:14:23.574 [2024-06-10 09:52:16.213308] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:23.574 [2024-06-10 09:52:16.213824] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:23.574 [2024-06-10 09:52:16.213863] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:23.574 [2024-06-10 09:52:16.213874] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:23.574 [2024-06-10 09:52:16.222371] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:23.574 [2024-06-10 09:52:16.222414] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:23.574 [2024-06-10 
09:52:16.229153] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:23.574 [2024-06-10 09:52:16.229907] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:23.574 [2024-06-10 09:52:16.232913] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:23.574 09:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.574 09:52:16 -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:23.574 09:52:16 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.574 09:52:16 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:23.574 09:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.574 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:14:23.574 09:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.574 09:52:16 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:23.574 09:52:16 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:23.574 09:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.574 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:14:23.574 [2024-06-10 09:52:16.486362] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:23.574 [2024-06-10 09:52:16.486853] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:23.574 [2024-06-10 09:52:16.486869] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:23.574 [2024-06-10 09:52:16.486884] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:23.574 [2024-06-10 09:52:16.490782] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:23.574 [2024-06-10 09:52:16.490934] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:23.574 [2024-06-10 09:52:16.505154] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:23.574 [2024-06-10 09:52:16.506146] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:23.574 [2024-06-10 09:52:16.534153] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:23.574 09:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.574 09:52:16 -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:23.574 09:52:16 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.574 09:52:16 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:23.574 09:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.574 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:14:23.574 09:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.574 09:52:16 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:23.574 09:52:16 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:23.574 09:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.574 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:14:23.574 [2024-06-10 09:52:16.789313] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:23.574 [2024-06-10 09:52:16.789805] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:23.574 [2024-06-10 09:52:16.789831] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:23.574 [2024-06-10 09:52:16.789842] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: 
ctrl cmd UBLK_CMD_ADD_DEV 00:14:23.574 [2024-06-10 09:52:16.797176] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:23.574 [2024-06-10 09:52:16.797210] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:23.574 [2024-06-10 09:52:16.805169] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:23.574 [2024-06-10 09:52:16.805913] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:23.574 [2024-06-10 09:52:16.808795] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:23.574 09:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.574 09:52:16 -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:23.574 09:52:16 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:23.574 09:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.574 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:14:23.574 09:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.574 09:52:16 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:23.574 { 00:14:23.574 "ublk_device": "/dev/ublkb0", 00:14:23.574 "id": 0, 00:14:23.574 "queue_depth": 512, 00:14:23.574 "num_queues": 4, 00:14:23.574 "bdev_name": "Malloc0" 00:14:23.574 }, 00:14:23.574 { 00:14:23.574 "ublk_device": "/dev/ublkb1", 00:14:23.574 "id": 1, 00:14:23.574 "queue_depth": 512, 00:14:23.574 "num_queues": 4, 00:14:23.574 "bdev_name": "Malloc1" 00:14:23.574 }, 00:14:23.574 { 00:14:23.574 "ublk_device": "/dev/ublkb2", 00:14:23.574 "id": 2, 00:14:23.574 "queue_depth": 512, 00:14:23.574 "num_queues": 4, 00:14:23.574 "bdev_name": "Malloc2" 00:14:23.574 }, 00:14:23.574 { 00:14:23.574 "ublk_device": "/dev/ublkb3", 00:14:23.574 "id": 3, 00:14:23.574 "queue_depth": 512, 00:14:23.574 "num_queues": 4, 00:14:23.574 "bdev_name": "Malloc3" 00:14:23.574 } 00:14:23.574 ]' 00:14:23.574 09:52:16 -- ublk/ublk.sh@72 -- # seq 0 3 00:14:23.574 09:52:16 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.574 09:52:16 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:23.574 09:52:16 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:23.574 09:52:16 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:23.574 09:52:16 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:23.574 09:52:16 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:23.574 09:52:16 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:23.574 09:52:16 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:23.574 09:52:17 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:23.574 09:52:17 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:23.574 09:52:17 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:23.574 09:52:17 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.574 09:52:17 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:23.574 09:52:17 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:23.574 09:52:17 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:23.574 09:52:17 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:23.574 09:52:17 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:23.574 09:52:17 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:23.574 09:52:17 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:23.574 09:52:17 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:23.574 09:52:17 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:23.833 09:52:17 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:23.833 09:52:17 -- ublk/ublk.sh@72 
-- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.833 09:52:17 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:23.833 09:52:17 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:23.833 09:52:17 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:23.833 09:52:17 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:23.833 09:52:17 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:23.833 09:52:17 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:23.833 09:52:17 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:23.833 09:52:17 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:23.833 09:52:17 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:24.091 09:52:17 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:24.091 09:52:17 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.091 09:52:17 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:24.091 09:52:17 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:24.091 09:52:17 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:24.091 09:52:17 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:24.091 09:52:17 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:24.091 09:52:17 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:24.091 09:52:17 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:24.091 09:52:17 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:24.091 09:52:17 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:24.350 09:52:17 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:24.350 09:52:17 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:24.350 09:52:17 -- ublk/ublk.sh@85 -- # seq 0 3 00:14:24.350 09:52:17 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.350 09:52:17 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:24.350 09:52:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.350 09:52:17 -- common/autotest_common.sh@10 -- # set +x 00:14:24.350 [2024-06-10 09:52:17.895502] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:24.350 [2024-06-10 09:52:17.927806] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:24.350 [2024-06-10 09:52:17.933520] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:24.350 [2024-06-10 09:52:17.941198] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:24.350 [2024-06-10 09:52:17.941628] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:24.350 [2024-06-10 09:52:17.941660] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:24.350 09:52:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.350 09:52:17 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.350 09:52:17 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:24.350 09:52:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.350 09:52:17 -- common/autotest_common.sh@10 -- # set +x 00:14:24.351 [2024-06-10 09:52:17.953355] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:24.351 [2024-06-10 09:52:17.987264] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:24.351 [2024-06-10 09:52:17.988947] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:24.351 [2024-06-10 09:52:18.003256] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:24.351 [2024-06-10 09:52:18.003665] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: 
ublk1: remove from tailq 00:14:24.351 [2024-06-10 09:52:18.003697] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:24.351 09:52:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.351 09:52:18 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.351 09:52:18 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:24.351 09:52:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.351 09:52:18 -- common/autotest_common.sh@10 -- # set +x 00:14:24.351 [2024-06-10 09:52:18.019268] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:24.351 [2024-06-10 09:52:18.054190] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:24.351 [2024-06-10 09:52:18.059576] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:24.351 [2024-06-10 09:52:18.063732] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:24.351 [2024-06-10 09:52:18.064158] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:24.351 [2024-06-10 09:52:18.064197] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:24.351 09:52:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.351 09:52:18 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.351 09:52:18 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:24.351 09:52:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.351 09:52:18 -- common/autotest_common.sh@10 -- # set +x 00:14:24.351 [2024-06-10 09:52:18.078334] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:24.609 [2024-06-10 09:52:18.118202] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:24.609 [2024-06-10 09:52:18.123311] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:24.609 [2024-06-10 09:52:18.133171] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:24.609 [2024-06-10 09:52:18.133629] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:24.609 [2024-06-10 09:52:18.133659] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:24.609 09:52:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.609 09:52:18 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:24.868 [2024-06-10 09:52:18.381339] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:24.868 [2024-06-10 09:52:18.389146] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:24.868 [2024-06-10 09:52:18.389218] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:24.868 09:52:18 -- ublk/ublk.sh@93 -- # seq 0 3 00:14:24.868 09:52:18 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.868 09:52:18 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:24.868 09:52:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.868 09:52:18 -- common/autotest_common.sh@10 -- # set +x 00:14:25.126 09:52:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.126 09:52:18 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:25.126 09:52:18 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:25.126 09:52:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.126 09:52:18 -- common/autotest_common.sh@10 -- # set +x 00:14:25.384 09:52:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.384 
09:52:19 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:25.384 09:52:19 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:25.384 09:52:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.384 09:52:19 -- common/autotest_common.sh@10 -- # set +x 00:14:25.643 09:52:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.643 09:52:19 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:25.643 09:52:19 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:25.643 09:52:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.643 09:52:19 -- common/autotest_common.sh@10 -- # set +x 00:14:26.209 09:52:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.209 09:52:19 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:26.209 09:52:19 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:26.209 09:52:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.209 09:52:19 -- common/autotest_common.sh@10 -- # set +x 00:14:26.209 09:52:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.209 09:52:19 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:26.209 09:52:19 -- lvol/common.sh@26 -- # jq length 00:14:26.209 09:52:19 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:26.209 09:52:19 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:26.209 09:52:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.209 09:52:19 -- common/autotest_common.sh@10 -- # set +x 00:14:26.209 09:52:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.209 09:52:19 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:26.209 09:52:19 -- lvol/common.sh@28 -- # jq length 00:14:26.209 ************************************ 00:14:26.209 END TEST test_create_multi_ublk 00:14:26.209 ************************************ 00:14:26.209 09:52:19 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:26.209 00:14:26.209 real 0m4.150s 00:14:26.209 user 0m1.325s 00:14:26.209 sys 0m0.155s 00:14:26.209 09:52:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.209 09:52:19 -- common/autotest_common.sh@10 -- # set +x 00:14:26.209 09:52:19 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:26.209 09:52:19 -- ublk/ublk.sh@147 -- # cleanup 00:14:26.209 09:52:19 -- ublk/ublk.sh@130 -- # killprocess 70509 00:14:26.209 09:52:19 -- common/autotest_common.sh@926 -- # '[' -z 70509 ']' 00:14:26.209 09:52:19 -- common/autotest_common.sh@930 -- # kill -0 70509 00:14:26.210 09:52:19 -- common/autotest_common.sh@931 -- # uname 00:14:26.210 09:52:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:26.210 09:52:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70509 00:14:26.210 killing process with pid 70509 00:14:26.210 09:52:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:26.210 09:52:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:26.210 09:52:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70509' 00:14:26.210 09:52:19 -- common/autotest_common.sh@945 -- # kill 70509 00:14:26.210 09:52:19 -- common/autotest_common.sh@950 -- # wait 70509 00:14:27.183 [2024-06-10 09:52:20.861767] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:27.183 [2024-06-10 09:52:20.861838] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:28.562 ************************************ 00:14:28.562 END TEST ublk 00:14:28.562 ************************************ 00:14:28.562 00:14:28.562 real 0m29.215s 00:14:28.562 user 0m44.537s 00:14:28.562 sys 
0m8.316s 00:14:28.562 09:52:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.562 09:52:21 -- common/autotest_common.sh@10 -- # set +x 00:14:28.562 09:52:21 -- spdk/autotest.sh@260 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:28.562 09:52:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:28.562 09:52:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:28.562 09:52:21 -- common/autotest_common.sh@10 -- # set +x 00:14:28.562 ************************************ 00:14:28.562 START TEST ublk_recovery 00:14:28.562 ************************************ 00:14:28.562 09:52:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:28.562 * Looking for test storage... 00:14:28.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:28.562 09:52:22 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:28.562 09:52:22 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:28.562 09:52:22 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:28.562 09:52:22 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:28.562 09:52:22 -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:28.562 09:52:22 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:28.562 09:52:22 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:28.562 09:52:22 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:28.562 09:52:22 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:28.562 09:52:22 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:28.562 09:52:22 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=70907 00:14:28.562 09:52:22 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:28.562 09:52:22 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:28.562 09:52:22 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 70907 00:14:28.562 09:52:22 -- common/autotest_common.sh@819 -- # '[' -z 70907 ']' 00:14:28.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.562 09:52:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.562 09:52:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:28.562 09:52:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.562 09:52:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:28.562 09:52:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.562 [2024-06-10 09:52:22.169009] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
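
The recovery scenario that follows: a ublk disk is started on top of malloc0, fio drives random I/O against /dev/ublkb1, the target is killed with SIGKILL mid-run, and a replacement target re-attaches the still-open kernel device with ublk_recover_disk instead of recreating it. Condensed from the commands visible in this log (error handling and the waitforlisten/jq checks omitted; $spdk_pid and $fio_pid stand in for the literal pids 70907 and 70955):

    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    ./scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128      # exposes /dev/ublkb1
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 "$spdk_pid"                                          # crash the target mid-I/O
    ./build/bin/spdk_tgt -m 0x3 -L ublk &                        # start a replacement target
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py ublk_recover_disk malloc0 1                 # re-attach, do not recreate
    wait "$fio_pid"                                              # fio must finish with err=0
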
00:14:28.562 [2024-06-10 09:52:22.169177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70907 ] 00:14:28.823 [2024-06-10 09:52:22.332512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:28.823 [2024-06-10 09:52:22.517751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:28.823 [2024-06-10 09:52:22.518335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.823 [2024-06-10 09:52:22.518356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.201 09:52:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:30.201 09:52:23 -- common/autotest_common.sh@852 -- # return 0 00:14:30.201 09:52:23 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:30.201 09:52:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.201 09:52:23 -- common/autotest_common.sh@10 -- # set +x 00:14:30.201 [2024-06-10 09:52:23.879745] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:30.201 09:52:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.201 09:52:23 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:30.201 09:52:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.201 09:52:23 -- common/autotest_common.sh@10 -- # set +x 00:14:30.459 malloc0 00:14:30.459 09:52:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.459 09:52:24 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:30.459 09:52:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.459 09:52:24 -- common/autotest_common.sh@10 -- # set +x 00:14:30.459 [2024-06-10 09:52:24.020392] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:14:30.459 [2024-06-10 09:52:24.020525] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:30.459 [2024-06-10 09:52:24.020541] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:30.459 [2024-06-10 09:52:24.020554] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:30.459 [2024-06-10 09:52:24.029327] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:30.459 [2024-06-10 09:52:24.029371] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:30.459 [2024-06-10 09:52:24.036169] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:30.459 [2024-06-10 09:52:24.036368] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:30.459 [2024-06-10 09:52:24.051248] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:30.459 1 00:14:30.459 09:52:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.459 09:52:24 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:31.396 09:52:25 -- ublk/ublk_recovery.sh@31 -- # fio_proc=70955 00:14:31.396 09:52:25 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:31.396 09:52:25 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:31.654 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:31.654 fio-3.35 00:14:31.654 Starting 1 process 00:14:36.935 09:52:30 -- ublk/ublk_recovery.sh@36 -- # kill -9 70907 00:14:36.935 09:52:30 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:42.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.204 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 70907 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:42.204 09:52:35 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:42.204 09:52:35 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71059 00:14:42.204 09:52:35 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.204 09:52:35 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71059 00:14:42.204 09:52:35 -- common/autotest_common.sh@819 -- # '[' -z 71059 ']' 00:14:42.204 09:52:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.204 09:52:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:42.204 09:52:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.204 09:52:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:42.204 09:52:35 -- common/autotest_common.sh@10 -- # set +x 00:14:42.204 [2024-06-10 09:52:35.183079] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:42.204 [2024-06-10 09:52:35.183468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71059 ] 00:14:42.204 [2024-06-10 09:52:35.360851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:42.204 [2024-06-10 09:52:35.616806] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:42.204 [2024-06-10 09:52:35.617432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.204 [2024-06-10 09:52:35.617448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.140 09:52:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:43.140 09:52:36 -- common/autotest_common.sh@852 -- # return 0 00:14:43.140 09:52:36 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:43.140 09:52:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.140 09:52:36 -- common/autotest_common.sh@10 -- # set +x 00:14:43.140 [2024-06-10 09:52:36.865691] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:43.140 09:52:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.140 09:52:36 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:43.140 09:52:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.140 09:52:36 -- common/autotest_common.sh@10 -- # set +x 00:14:43.399 malloc0 00:14:43.399 09:52:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.399 09:52:36 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:43.399 09:52:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.399 09:52:36 -- common/autotest_common.sh@10 -- # set +x 00:14:43.399 [2024-06-10 09:52:37.001404] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:43.399 [2024-06-10 09:52:37.001460] ublk.c: 
933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:43.399 [2024-06-10 09:52:37.001473] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:43.399 [2024-06-10 09:52:37.003176] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:43.399 [2024-06-10 09:52:37.003205] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:43.399 [2024-06-10 09:52:37.003302] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:43.399 1 00:14:43.399 09:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.399 09:52:37 -- ublk/ublk_recovery.sh@52 -- # wait 70955 00:15:09.998 [2024-06-10 09:53:00.362228] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:09.998 [2024-06-10 09:53:00.369809] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:09.998 [2024-06-10 09:53:00.376371] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:09.998 [2024-06-10 09:53:00.376421] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:31.960 00:15:31.960 fio_test: (groupid=0, jobs=1): err= 0: pid=70958: Mon Jun 10 09:53:25 2024 00:15:31.960 read: IOPS=10.2k, BW=39.7MiB/s (41.6MB/s)(2380MiB/60002msec) 00:15:31.960 slat (usec): min=2, max=641, avg= 6.24, stdev= 2.93 00:15:31.960 clat (usec): min=1169, max=30327k, avg=5649.82, stdev=280118.33 00:15:31.960 lat (usec): min=1185, max=30327k, avg=5656.06, stdev=280118.33 00:15:31.960 clat percentiles (usec): 00:15:31.960 | 1.00th=[ 2474], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2835], 00:15:31.960 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:15:31.960 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3195], 95.00th=[ 4146], 00:15:31.960 | 99.00th=[ 6128], 99.50th=[ 6718], 99.90th=[ 8717], 99.95th=[ 9765], 00:15:31.960 | 99.99th=[13829] 00:15:31.960 bw ( KiB/s): min=36960, max=88328, per=100.00%, avg=81378.85, stdev=9301.74, samples=59 00:15:31.960 iops : min= 9240, max=22082, avg=20344.75, stdev=2325.44, samples=59 00:15:31.960 write: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(2377MiB/60002msec); 0 zone resets 00:15:31.960 slat (usec): min=2, max=1448, avg= 6.34, stdev= 3.80 00:15:31.960 clat (usec): min=1087, max=30327k, avg=6949.30, stdev=338831.20 00:15:31.960 lat (usec): min=1095, max=30327k, avg=6955.64, stdev=338831.20 00:15:31.960 clat percentiles (msec): 00:15:31.960 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:15:31.960 | 30.00th=[ 3], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:15:31.960 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 4], 00:15:31.960 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 13], 00:15:31.960 | 99.99th=[17113] 00:15:31.960 bw ( KiB/s): min=36640, max=86920, per=100.00%, avg=81292.88, stdev=9263.95, samples=59 00:15:31.960 iops : min= 9160, max=21730, avg=20323.22, stdev=2315.99, samples=59 00:15:31.960 lat (msec) : 2=0.06%, 4=94.71%, 10=5.17%, 20=0.04%, >=2000=0.01% 00:15:31.960 cpu : usr=5.47%, sys=11.99%, ctx=38255, majf=0, minf=13 00:15:31.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:31.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:31.960 issued rwts: total=609270,608584,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:15:31.960 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:31.960 00:15:31.960 Run status group 0 (all jobs): 00:15:31.960 READ: bw=39.7MiB/s (41.6MB/s), 39.7MiB/s-39.7MiB/s (41.6MB/s-41.6MB/s), io=2380MiB (2496MB), run=60002-60002msec 00:15:31.960 WRITE: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=2377MiB (2493MB), run=60002-60002msec 00:15:31.960 00:15:31.960 Disk stats (read/write): 00:15:31.960 ublkb1: ios=606825/606078, merge=0/0, ticks=3380319/4100677, in_queue=7480996, util=99.94% 00:15:31.960 09:53:25 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:31.960 09:53:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.960 09:53:25 -- common/autotest_common.sh@10 -- # set +x 00:15:31.960 [2024-06-10 09:53:25.328560] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:31.960 [2024-06-10 09:53:25.379299] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:31.960 [2024-06-10 09:53:25.379693] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:31.960 [2024-06-10 09:53:25.386151] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:31.960 [2024-06-10 09:53:25.386285] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:31.960 [2024-06-10 09:53:25.386303] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:31.961 09:53:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.961 09:53:25 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:31.961 09:53:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.961 09:53:25 -- common/autotest_common.sh@10 -- # set +x 00:15:31.961 [2024-06-10 09:53:25.401253] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:31.961 [2024-06-10 09:53:25.408186] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:31.961 [2024-06-10 09:53:25.408248] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:31.961 09:53:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.961 09:53:25 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:31.961 09:53:25 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:31.961 09:53:25 -- ublk/ublk_recovery.sh@14 -- # killprocess 71059 00:15:31.961 09:53:25 -- common/autotest_common.sh@926 -- # '[' -z 71059 ']' 00:15:31.961 09:53:25 -- common/autotest_common.sh@930 -- # kill -0 71059 00:15:31.961 09:53:25 -- common/autotest_common.sh@931 -- # uname 00:15:31.961 09:53:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:31.961 09:53:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71059 00:15:31.961 09:53:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:31.961 killing process with pid 71059 00:15:31.961 09:53:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:31.961 09:53:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71059' 00:15:31.961 09:53:25 -- common/autotest_common.sh@945 -- # kill 71059 00:15:31.961 09:53:25 -- common/autotest_common.sh@950 -- # wait 71059 00:15:32.910 [2024-06-10 09:53:26.440282] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:32.910 [2024-06-10 09:53:26.440353] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:34.288 00:15:34.288 real 1m5.698s 00:15:34.288 user 1m52.638s 00:15:34.288 sys 0m18.809s 00:15:34.288 09:53:27 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:15:34.288 ************************************ 00:15:34.288 END TEST ublk_recovery 00:15:34.288 09:53:27 -- common/autotest_common.sh@10 -- # set +x 00:15:34.288 ************************************ 00:15:34.288 09:53:27 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:15:34.288 09:53:27 -- spdk/autotest.sh@268 -- # timing_exit lib 00:15:34.288 09:53:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:34.288 09:53:27 -- common/autotest_common.sh@10 -- # set +x 00:15:34.288 09:53:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:15:34.288 09:53:27 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:15:34.288 09:53:27 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:15:34.288 09:53:27 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:34.288 09:53:27 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:34.288 09:53:27 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:15:34.288 09:53:27 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:15:34.288 09:53:27 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:15:34.288 09:53:27 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:15:34.288 09:53:27 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:15:34.288 09:53:27 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:34.288 09:53:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:34.288 09:53:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:34.288 09:53:27 -- common/autotest_common.sh@10 -- # set +x 00:15:34.288 ************************************ 00:15:34.288 START TEST ftl 00:15:34.288 ************************************ 00:15:34.288 09:53:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:34.288 * Looking for test storage... 00:15:34.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:34.288 09:53:27 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:34.288 09:53:27 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:34.288 09:53:27 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:34.288 09:53:27 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:34.288 09:53:27 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
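The dirname/readlink pair traced here is the path-anchoring idiom these suites use everywhere; a minimal standalone sketch of it, keeping the variable names from the trace (nothing beyond them is assumed):

    # resolve this script's own directory, then walk two levels up to the repo root
    testdir=$(readlink -f "$(dirname "$0")")
    rootdir=$(readlink -f "$testdir/../..")
    rpc_py=$rootdir/scripts/rpc.py   # every RPC in the trace goes through this helper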
00:15:34.288 09:53:27 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:34.288 09:53:27 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.288 09:53:27 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:34.288 09:53:27 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:34.288 09:53:27 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.288 09:53:27 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.288 09:53:27 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:34.288 09:53:27 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:34.288 09:53:27 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:34.288 09:53:27 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:34.288 09:53:27 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:34.288 09:53:27 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:34.288 09:53:27 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.288 09:53:27 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.288 09:53:27 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:34.288 09:53:27 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:34.288 09:53:27 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:34.288 09:53:27 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:34.288 09:53:27 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:34.288 09:53:27 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:34.288 09:53:27 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:34.288 09:53:27 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:34.288 09:53:27 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:34.288 09:53:27 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:34.288 09:53:27 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.288 09:53:27 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:34.288 09:53:27 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:34.288 09:53:27 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:34.288 09:53:27 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:34.288 09:53:27 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:34.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:34.858 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:34.858 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:34.858 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:34.858 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:34.858 09:53:28 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=71853 00:15:34.858 09:53:28 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:34.858 09:53:28 -- ftl/ftl.sh@38 -- # waitforlisten 71853 00:15:34.858 09:53:28 -- common/autotest_common.sh@819 -- # '[' -z 71853 ']' 00:15:34.858 09:53:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.858 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:15:34.858 09:53:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:34.858 09:53:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.858 09:53:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:34.858 09:53:28 -- common/autotest_common.sh@10 -- # set +x 00:15:34.858 [2024-06-10 09:53:28.519874] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:34.858 [2024-06-10 09:53:28.520320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71853 ] 00:15:35.117 [2024-06-10 09:53:28.693549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.375 [2024-06-10 09:53:28.884500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:35.375 [2024-06-10 09:53:28.884786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.942 09:53:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:35.942 09:53:29 -- common/autotest_common.sh@852 -- # return 0 00:15:35.942 09:53:29 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:35.942 09:53:29 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:36.880 09:53:30 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:36.880 09:53:30 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:37.468 09:53:31 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:37.468 09:53:31 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:37.468 09:53:31 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:37.732 09:53:31 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:15:37.732 09:53:31 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:37.732 09:53:31 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:15:37.732 09:53:31 -- ftl/ftl.sh@50 -- # break 00:15:37.732 09:53:31 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:15:37.732 09:53:31 -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:37.732 09:53:31 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:37.733 09:53:31 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:37.991 09:53:31 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:15:37.991 09:53:31 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:37.991 09:53:31 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:15:37.991 09:53:31 -- ftl/ftl.sh@63 -- # break 00:15:37.991 09:53:31 -- ftl/ftl.sh@66 -- # killprocess 71853 00:15:37.991 09:53:31 -- common/autotest_common.sh@926 -- # '[' -z 71853 ']' 00:15:37.991 09:53:31 -- common/autotest_common.sh@930 -- # kill -0 71853 00:15:37.991 09:53:31 -- common/autotest_common.sh@931 -- # uname 00:15:37.991 09:53:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:37.991 09:53:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71853 00:15:37.991 killing process with pid 71853 00:15:37.991 09:53:31 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:37.991 09:53:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:37.991 09:53:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71853' 00:15:37.991 09:53:31 -- common/autotest_common.sh@945 -- # kill 71853 00:15:37.991 09:53:31 -- common/autotest_common.sh@950 -- # wait 71853 00:15:40.528 09:53:33 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:15:40.528 09:53:33 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:15:40.528 09:53:33 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:15:40.528 09:53:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:40.528 09:53:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:40.528 09:53:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.528 ************************************ 00:15:40.528 START TEST ftl_fio_basic 00:15:40.528 ************************************ 00:15:40.528 09:53:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:15:40.528 * Looking for test storage... 00:15:40.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:40.528 09:53:33 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:40.528 09:53:33 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:40.528 09:53:33 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:40.528 09:53:33 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:40.528 09:53:33 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:40.528 09:53:33 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:40.528 09:53:33 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.528 09:53:33 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:40.528 09:53:33 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:40.528 09:53:33 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.528 09:53:33 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.528 09:53:33 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:40.528 09:53:33 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:40.528 09:53:33 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:40.528 09:53:33 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:40.528 09:53:33 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:40.528 09:53:33 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:40.528 09:53:33 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.528 09:53:33 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.528 09:53:33 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:40.528 09:53:33 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:40.528 09:53:33 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:40.528 09:53:33 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:40.528 09:53:33 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:40.528 09:53:33 -- ftl/common.sh@22 -- # 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:40.528 09:53:33 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:40.528 09:53:33 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:40.528 09:53:33 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:40.528 09:53:33 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:40.528 09:53:33 -- ftl/fio.sh@11 -- # declare -A suite 00:15:40.528 09:53:33 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:40.528 09:53:33 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:40.528 09:53:33 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:40.528 09:53:33 -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.528 09:53:33 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:15:40.528 09:53:33 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:15:40.528 09:53:33 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:40.528 09:53:33 -- ftl/fio.sh@26 -- # uuid= 00:15:40.528 09:53:33 -- ftl/fio.sh@27 -- # timeout=240 00:15:40.528 09:53:33 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:40.528 09:53:33 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:40.528 09:53:33 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:40.528 09:53:33 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:40.528 09:53:33 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:40.528 09:53:33 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:40.528 09:53:33 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:40.528 09:53:33 -- ftl/fio.sh@45 -- # svcpid=71982 00:15:40.528 09:53:33 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:40.528 09:53:33 -- ftl/fio.sh@46 -- # waitforlisten 71982 00:15:40.528 09:53:33 -- common/autotest_common.sh@819 -- # '[' -z 71982 ']' 00:15:40.528 09:53:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.528 09:53:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:40.528 09:53:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.528 09:53:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:40.528 09:53:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.528 [2024-06-10 09:53:34.009731] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
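fio.sh brings the target up with the same launch-and-wait pattern seen earlier: spdk_tgt goes to the background and nothing touches the bdev layer until the RPC socket answers. A rough equivalent of that guard (the polling loop is an assumption; the suite actually uses waitforlisten from autotest_common.sh, as the echo above shows):

    # core mask 7: three reactors, matching the -m 7 invocation in the trace
    "$rootdir/build/bin/spdk_tgt" -m 7 &
    svcpid=$!
    # poll the default RPC socket until the target is ready to serve requests
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done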
00:15:40.528 [2024-06-10 09:53:34.010124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71982 ] 00:15:40.528 [2024-06-10 09:53:34.180771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:40.787 [2024-06-10 09:53:34.344115] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:40.787 [2024-06-10 09:53:34.344755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.787 [2024-06-10 09:53:34.344895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.787 [2024-06-10 09:53:34.344905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.165 09:53:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:42.165 09:53:35 -- common/autotest_common.sh@852 -- # return 0 00:15:42.165 09:53:35 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:42.165 09:53:35 -- ftl/common.sh@54 -- # local name=nvme0 00:15:42.165 09:53:35 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:42.165 09:53:35 -- ftl/common.sh@56 -- # local size=103424 00:15:42.165 09:53:35 -- ftl/common.sh@59 -- # local base_bdev 00:15:42.165 09:53:35 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:42.424 09:53:35 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:42.424 09:53:35 -- ftl/common.sh@62 -- # local base_size 00:15:42.424 09:53:35 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:42.424 09:53:35 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:15:42.424 09:53:35 -- common/autotest_common.sh@1358 -- # local bdev_info 00:15:42.424 09:53:35 -- common/autotest_common.sh@1359 -- # local bs 00:15:42.424 09:53:35 -- common/autotest_common.sh@1360 -- # local nb 00:15:42.424 09:53:35 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:42.424 09:53:36 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:15:42.424 { 00:15:42.424 "name": "nvme0n1", 00:15:42.424 "aliases": [ 00:15:42.424 "d5c2b3e0-f727-4bce-ac32-990fa1b9f0da" 00:15:42.424 ], 00:15:42.424 "product_name": "NVMe disk", 00:15:42.424 "block_size": 4096, 00:15:42.424 "num_blocks": 1310720, 00:15:42.424 "uuid": "d5c2b3e0-f727-4bce-ac32-990fa1b9f0da", 00:15:42.424 "assigned_rate_limits": { 00:15:42.424 "rw_ios_per_sec": 0, 00:15:42.424 "rw_mbytes_per_sec": 0, 00:15:42.424 "r_mbytes_per_sec": 0, 00:15:42.424 "w_mbytes_per_sec": 0 00:15:42.424 }, 00:15:42.424 "claimed": false, 00:15:42.424 "zoned": false, 00:15:42.424 "supported_io_types": { 00:15:42.424 "read": true, 00:15:42.424 "write": true, 00:15:42.424 "unmap": true, 00:15:42.424 "write_zeroes": true, 00:15:42.424 "flush": true, 00:15:42.424 "reset": true, 00:15:42.424 "compare": true, 00:15:42.424 "compare_and_write": false, 00:15:42.424 "abort": true, 00:15:42.424 "nvme_admin": true, 00:15:42.424 "nvme_io": true 00:15:42.424 }, 00:15:42.424 "driver_specific": { 00:15:42.424 "nvme": [ 00:15:42.424 { 00:15:42.424 "pci_address": "0000:00:07.0", 00:15:42.424 "trid": { 00:15:42.424 "trtype": "PCIe", 00:15:42.424 "traddr": "0000:00:07.0" 00:15:42.424 }, 00:15:42.424 "ctrlr_data": { 00:15:42.424 "cntlid": 0, 00:15:42.424 "vendor_id": "0x1b36", 00:15:42.424 "model_number": "QEMU NVMe Ctrl", 00:15:42.424 "serial_number": 
"12341", 00:15:42.424 "firmware_revision": "8.0.0", 00:15:42.424 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:42.424 "oacs": { 00:15:42.424 "security": 0, 00:15:42.424 "format": 1, 00:15:42.424 "firmware": 0, 00:15:42.424 "ns_manage": 1 00:15:42.424 }, 00:15:42.424 "multi_ctrlr": false, 00:15:42.424 "ana_reporting": false 00:15:42.424 }, 00:15:42.424 "vs": { 00:15:42.424 "nvme_version": "1.4" 00:15:42.424 }, 00:15:42.424 "ns_data": { 00:15:42.424 "id": 1, 00:15:42.424 "can_share": false 00:15:42.424 } 00:15:42.424 } 00:15:42.424 ], 00:15:42.424 "mp_policy": "active_passive" 00:15:42.424 } 00:15:42.424 } 00:15:42.424 ]' 00:15:42.424 09:53:36 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:15:42.683 09:53:36 -- common/autotest_common.sh@1362 -- # bs=4096 00:15:42.683 09:53:36 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:15:42.683 09:53:36 -- common/autotest_common.sh@1363 -- # nb=1310720 00:15:42.683 09:53:36 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:15:42.683 09:53:36 -- common/autotest_common.sh@1367 -- # echo 5120 00:15:42.683 09:53:36 -- ftl/common.sh@63 -- # base_size=5120 00:15:42.683 09:53:36 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:42.683 09:53:36 -- ftl/common.sh@67 -- # clear_lvols 00:15:42.683 09:53:36 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:42.683 09:53:36 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:42.942 09:53:36 -- ftl/common.sh@28 -- # stores= 00:15:42.942 09:53:36 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:43.200 09:53:36 -- ftl/common.sh@68 -- # lvs=85d98bb9-5ffc-4133-b7e8-23a756ab554a 00:15:43.200 09:53:36 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 85d98bb9-5ffc-4133-b7e8-23a756ab554a 00:15:43.459 09:53:37 -- ftl/fio.sh@48 -- # split_bdev=35691f0c-5644-4cfb-a2df-2dfa2e6d393e 00:15:43.459 09:53:37 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 35691f0c-5644-4cfb-a2df-2dfa2e6d393e 00:15:43.459 09:53:37 -- ftl/common.sh@35 -- # local name=nvc0 00:15:43.459 09:53:37 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:43.459 09:53:37 -- ftl/common.sh@37 -- # local base_bdev=35691f0c-5644-4cfb-a2df-2dfa2e6d393e 00:15:43.459 09:53:37 -- ftl/common.sh@38 -- # local cache_size= 00:15:43.459 09:53:37 -- ftl/common.sh@41 -- # get_bdev_size 35691f0c-5644-4cfb-a2df-2dfa2e6d393e 00:15:43.459 09:53:37 -- common/autotest_common.sh@1357 -- # local bdev_name=35691f0c-5644-4cfb-a2df-2dfa2e6d393e 00:15:43.459 09:53:37 -- common/autotest_common.sh@1358 -- # local bdev_info 00:15:43.459 09:53:37 -- common/autotest_common.sh@1359 -- # local bs 00:15:43.459 09:53:37 -- common/autotest_common.sh@1360 -- # local nb 00:15:43.459 09:53:37 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35691f0c-5644-4cfb-a2df-2dfa2e6d393e 00:15:43.459 09:53:37 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:15:43.459 { 00:15:43.459 "name": "35691f0c-5644-4cfb-a2df-2dfa2e6d393e", 00:15:43.459 "aliases": [ 00:15:43.459 "lvs/nvme0n1p0" 00:15:43.459 ], 00:15:43.459 "product_name": "Logical Volume", 00:15:43.459 "block_size": 4096, 00:15:43.459 "num_blocks": 26476544, 00:15:43.459 "uuid": "35691f0c-5644-4cfb-a2df-2dfa2e6d393e", 00:15:43.459 "assigned_rate_limits": { 00:15:43.459 "rw_ios_per_sec": 0, 00:15:43.459 "rw_mbytes_per_sec": 0, 00:15:43.459 "r_mbytes_per_sec": 0, 00:15:43.459 
"w_mbytes_per_sec": 0 00:15:43.459 }, 00:15:43.459 "claimed": false, 00:15:43.459 "zoned": false, 00:15:43.459 "supported_io_types": { 00:15:43.459 "read": true, 00:15:43.459 "write": true, 00:15:43.459 "unmap": true, 00:15:43.459 "write_zeroes": true, 00:15:43.459 "flush": false, 00:15:43.459 "reset": true, 00:15:43.459 "compare": false, 00:15:43.459 "compare_and_write": false, 00:15:43.459 "abort": false, 00:15:43.459 "nvme_admin": false, 00:15:43.459 "nvme_io": false 00:15:43.459 }, 00:15:43.459 "driver_specific": { 00:15:43.459 "lvol": { 00:15:43.459 "lvol_store_uuid": "85d98bb9-5ffc-4133-b7e8-23a756ab554a", 00:15:43.459 "base_bdev": "nvme0n1", 00:15:43.459 "thin_provision": true, 00:15:43.459 "snapshot": false, 00:15:43.459 "clone": false, 00:15:43.459 "esnap_clone": false 00:15:43.459 } 00:15:43.459 } 00:15:43.459 } 00:15:43.459 ]' 00:15:43.459 09:53:37 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:15:43.718 09:53:37 -- common/autotest_common.sh@1362 -- # bs=4096 00:15:43.718 09:53:37 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:15:43.718 09:53:37 -- common/autotest_common.sh@1363 -- # nb=26476544 00:15:43.718 09:53:37 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:15:43.718 09:53:37 -- common/autotest_common.sh@1367 -- # echo 103424 00:15:43.718 09:53:37 -- ftl/common.sh@41 -- # local base_size=5171 00:15:43.718 09:53:37 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:43.718 09:53:37 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:43.976 09:53:37 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:43.976 09:53:37 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:43.976 09:53:37 -- ftl/common.sh@48 -- # get_bdev_size 35691f0c-5644-4cfb-a2df-2dfa2e6d393e 00:15:43.976 09:53:37 -- common/autotest_common.sh@1357 -- # local bdev_name=35691f0c-5644-4cfb-a2df-2dfa2e6d393e 00:15:43.976 09:53:37 -- common/autotest_common.sh@1358 -- # local bdev_info 00:15:43.976 09:53:37 -- common/autotest_common.sh@1359 -- # local bs 00:15:43.976 09:53:37 -- common/autotest_common.sh@1360 -- # local nb 00:15:43.976 09:53:37 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35691f0c-5644-4cfb-a2df-2dfa2e6d393e 00:15:44.235 09:53:37 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:15:44.235 { 00:15:44.235 "name": "35691f0c-5644-4cfb-a2df-2dfa2e6d393e", 00:15:44.235 "aliases": [ 00:15:44.235 "lvs/nvme0n1p0" 00:15:44.235 ], 00:15:44.235 "product_name": "Logical Volume", 00:15:44.235 "block_size": 4096, 00:15:44.235 "num_blocks": 26476544, 00:15:44.235 "uuid": "35691f0c-5644-4cfb-a2df-2dfa2e6d393e", 00:15:44.235 "assigned_rate_limits": { 00:15:44.235 "rw_ios_per_sec": 0, 00:15:44.235 "rw_mbytes_per_sec": 0, 00:15:44.235 "r_mbytes_per_sec": 0, 00:15:44.235 "w_mbytes_per_sec": 0 00:15:44.235 }, 00:15:44.235 "claimed": false, 00:15:44.235 "zoned": false, 00:15:44.235 "supported_io_types": { 00:15:44.235 "read": true, 00:15:44.235 "write": true, 00:15:44.235 "unmap": true, 00:15:44.235 "write_zeroes": true, 00:15:44.235 "flush": false, 00:15:44.235 "reset": true, 00:15:44.235 "compare": false, 00:15:44.235 "compare_and_write": false, 00:15:44.235 "abort": false, 00:15:44.235 "nvme_admin": false, 00:15:44.235 "nvme_io": false 00:15:44.235 }, 00:15:44.235 "driver_specific": { 00:15:44.235 "lvol": { 00:15:44.235 "lvol_store_uuid": "85d98bb9-5ffc-4133-b7e8-23a756ab554a", 00:15:44.235 "base_bdev": "nvme0n1", 00:15:44.235 "thin_provision": true, 
00:15:44.235 "snapshot": false, 00:15:44.235 "clone": false, 00:15:44.235 "esnap_clone": false 00:15:44.235 } 00:15:44.235 } 00:15:44.235 } 00:15:44.235 ]' 00:15:44.235 09:53:37 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:15:44.235 09:53:37 -- common/autotest_common.sh@1362 -- # bs=4096 00:15:44.235 09:53:37 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:15:44.235 09:53:37 -- common/autotest_common.sh@1363 -- # nb=26476544 00:15:44.235 09:53:37 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:15:44.235 09:53:37 -- common/autotest_common.sh@1367 -- # echo 103424 00:15:44.235 09:53:37 -- ftl/common.sh@48 -- # cache_size=5171 00:15:44.235 09:53:37 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:44.495 09:53:38 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:44.495 09:53:38 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:44.495 09:53:38 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:44.495 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:44.495 09:53:38 -- ftl/fio.sh@56 -- # get_bdev_size 35691f0c-5644-4cfb-a2df-2dfa2e6d393e 00:15:44.495 09:53:38 -- common/autotest_common.sh@1357 -- # local bdev_name=35691f0c-5644-4cfb-a2df-2dfa2e6d393e 00:15:44.495 09:53:38 -- common/autotest_common.sh@1358 -- # local bdev_info 00:15:44.495 09:53:38 -- common/autotest_common.sh@1359 -- # local bs 00:15:44.495 09:53:38 -- common/autotest_common.sh@1360 -- # local nb 00:15:44.495 09:53:38 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35691f0c-5644-4cfb-a2df-2dfa2e6d393e 00:15:44.754 09:53:38 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:15:44.754 { 00:15:44.754 "name": "35691f0c-5644-4cfb-a2df-2dfa2e6d393e", 00:15:44.754 "aliases": [ 00:15:44.754 "lvs/nvme0n1p0" 00:15:44.754 ], 00:15:44.754 "product_name": "Logical Volume", 00:15:44.754 "block_size": 4096, 00:15:44.754 "num_blocks": 26476544, 00:15:44.754 "uuid": "35691f0c-5644-4cfb-a2df-2dfa2e6d393e", 00:15:44.754 "assigned_rate_limits": { 00:15:44.754 "rw_ios_per_sec": 0, 00:15:44.754 "rw_mbytes_per_sec": 0, 00:15:44.754 "r_mbytes_per_sec": 0, 00:15:44.754 "w_mbytes_per_sec": 0 00:15:44.754 }, 00:15:44.754 "claimed": false, 00:15:44.754 "zoned": false, 00:15:44.754 "supported_io_types": { 00:15:44.754 "read": true, 00:15:44.754 "write": true, 00:15:44.754 "unmap": true, 00:15:44.754 "write_zeroes": true, 00:15:44.754 "flush": false, 00:15:44.754 "reset": true, 00:15:44.754 "compare": false, 00:15:44.754 "compare_and_write": false, 00:15:44.754 "abort": false, 00:15:44.754 "nvme_admin": false, 00:15:44.754 "nvme_io": false 00:15:44.754 }, 00:15:44.754 "driver_specific": { 00:15:44.754 "lvol": { 00:15:44.754 "lvol_store_uuid": "85d98bb9-5ffc-4133-b7e8-23a756ab554a", 00:15:44.754 "base_bdev": "nvme0n1", 00:15:44.754 "thin_provision": true, 00:15:44.754 "snapshot": false, 00:15:44.754 "clone": false, 00:15:44.754 "esnap_clone": false 00:15:44.754 } 00:15:44.754 } 00:15:44.754 } 00:15:44.754 ]' 00:15:44.754 09:53:38 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:15:44.754 09:53:38 -- common/autotest_common.sh@1362 -- # bs=4096 00:15:44.754 09:53:38 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:15:45.013 09:53:38 -- common/autotest_common.sh@1363 -- # nb=26476544 00:15:45.013 09:53:38 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:15:45.013 09:53:38 -- common/autotest_common.sh@1367 -- # echo 103424 00:15:45.013 
09:53:38 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:45.013 09:53:38 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:45.013 09:53:38 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 35691f0c-5644-4cfb-a2df-2dfa2e6d393e -c nvc0n1p0 --l2p_dram_limit 60 00:15:45.273 [2024-06-10 09:53:38.785753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.273 [2024-06-10 09:53:38.786261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:45.273 [2024-06-10 09:53:38.786312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:15:45.273 [2024-06-10 09:53:38.786329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.273 [2024-06-10 09:53:38.786465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.273 [2024-06-10 09:53:38.786517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:45.273 [2024-06-10 09:53:38.786533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:15:45.273 [2024-06-10 09:53:38.786544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.273 [2024-06-10 09:53:38.786631] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:45.273 [2024-06-10 09:53:38.787730] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:45.273 [2024-06-10 09:53:38.787768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.273 [2024-06-10 09:53:38.787782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:45.273 [2024-06-10 09:53:38.787796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:15:45.273 [2024-06-10 09:53:38.787807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.273 [2024-06-10 09:53:38.787945] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7ed123ec-e07f-436f-b73b-994aed32d7c1 00:15:45.273 [2024-06-10 09:53:38.789020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.273 [2024-06-10 09:53:38.789055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:45.273 [2024-06-10 09:53:38.789070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:15:45.273 [2024-06-10 09:53:38.789083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.273 [2024-06-10 09:53:38.793420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.273 [2024-06-10 09:53:38.793478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:45.273 [2024-06-10 09:53:38.793495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.253 ms 00:15:45.273 [2024-06-10 09:53:38.793510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.273 [2024-06-10 09:53:38.793637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.273 [2024-06-10 09:53:38.793659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:45.273 [2024-06-10 09:53:38.793672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:15:45.273 [2024-06-10 09:53:38.793687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.273 [2024-06-10 09:53:38.793783] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:15:45.273 [2024-06-10 09:53:38.793804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:45.273 [2024-06-10 09:53:38.793817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:15:45.273 [2024-06-10 09:53:38.793835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.273 [2024-06-10 09:53:38.793887] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:45.273 [2024-06-10 09:53:38.798495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.273 [2024-06-10 09:53:38.798540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:45.273 [2024-06-10 09:53:38.798575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.624 ms 00:15:45.273 [2024-06-10 09:53:38.798590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.273 [2024-06-10 09:53:38.798645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.273 [2024-06-10 09:53:38.798661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:45.273 [2024-06-10 09:53:38.798676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:15:45.273 [2024-06-10 09:53:38.798687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.273 [2024-06-10 09:53:38.798771] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:45.273 [2024-06-10 09:53:38.798948] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:45.273 [2024-06-10 09:53:38.798987] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:45.273 [2024-06-10 09:53:38.799004] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:45.273 [2024-06-10 09:53:38.799024] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:45.273 [2024-06-10 09:53:38.799039] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:45.273 [2024-06-10 09:53:38.799060] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:45.273 [2024-06-10 09:53:38.799072] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:45.273 [2024-06-10 09:53:38.799087] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:45.273 [2024-06-10 09:53:38.799098] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:45.273 [2024-06-10 09:53:38.799129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.273 [2024-06-10 09:53:38.799144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:45.273 [2024-06-10 09:53:38.799159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:15:45.273 [2024-06-10 09:53:38.799170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.273 [2024-06-10 09:53:38.799270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.273 [2024-06-10 09:53:38.799287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:45.273 [2024-06-10 09:53:38.799303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.059 ms 00:15:45.273 [2024-06-10 09:53:38.799314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.273 [2024-06-10 09:53:38.799421] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:45.273 [2024-06-10 09:53:38.799437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:45.273 [2024-06-10 09:53:38.799454] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:45.273 [2024-06-10 09:53:38.799466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.273 [2024-06-10 09:53:38.799479] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:45.273 [2024-06-10 09:53:38.799490] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:45.273 [2024-06-10 09:53:38.799503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:45.273 [2024-06-10 09:53:38.799514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:45.273 [2024-06-10 09:53:38.799527] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:45.273 [2024-06-10 09:53:38.799537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:45.273 [2024-06-10 09:53:38.799549] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:45.273 [2024-06-10 09:53:38.799559] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:45.273 [2024-06-10 09:53:38.799573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:45.273 [2024-06-10 09:53:38.799584] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:45.273 [2024-06-10 09:53:38.799597] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:45.273 [2024-06-10 09:53:38.799607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.273 [2024-06-10 09:53:38.799621] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:45.273 [2024-06-10 09:53:38.799631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:45.273 [2024-06-10 09:53:38.799643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.273 [2024-06-10 09:53:38.799654] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:45.273 [2024-06-10 09:53:38.799668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:45.273 [2024-06-10 09:53:38.799679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:45.273 [2024-06-10 09:53:38.799691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:45.273 [2024-06-10 09:53:38.799702] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:45.273 [2024-06-10 09:53:38.799714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:45.273 [2024-06-10 09:53:38.799724] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:45.273 [2024-06-10 09:53:38.799736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:45.273 [2024-06-10 09:53:38.799746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:45.273 [2024-06-10 09:53:38.799758] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:45.274 [2024-06-10 09:53:38.799769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:45.274 [2024-06-10 09:53:38.799781] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:45.274 [2024-06-10 09:53:38.799791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:45.274 [2024-06-10 09:53:38.799805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:45.274 [2024-06-10 09:53:38.799815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:45.274 [2024-06-10 09:53:38.799827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:45.274 [2024-06-10 09:53:38.799839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:45.274 [2024-06-10 09:53:38.799851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:45.274 [2024-06-10 09:53:38.799861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:45.274 [2024-06-10 09:53:38.799897] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:45.274 [2024-06-10 09:53:38.799909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:45.274 [2024-06-10 09:53:38.799921] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:45.274 [2024-06-10 09:53:38.799933] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:45.274 [2024-06-10 09:53:38.799945] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:45.274 [2024-06-10 09:53:38.799956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.274 [2024-06-10 09:53:38.799969] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:45.274 [2024-06-10 09:53:38.799980] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:45.274 [2024-06-10 09:53:38.799992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:45.274 [2024-06-10 09:53:38.800003] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:45.274 [2024-06-10 09:53:38.800017] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:45.274 [2024-06-10 09:53:38.800028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:45.274 [2024-06-10 09:53:38.800041] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:45.274 [2024-06-10 09:53:38.800056] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:45.274 [2024-06-10 09:53:38.800073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:45.274 [2024-06-10 09:53:38.800085] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:45.274 [2024-06-10 09:53:38.800098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:45.274 [2024-06-10 09:53:38.800123] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:45.274 [2024-06-10 09:53:38.800138] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:45.274 [2024-06-10 09:53:38.800150] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:45.274 
[2024-06-10 09:53:38.800163] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:45.274 [2024-06-10 09:53:38.800174] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:45.274 [2024-06-10 09:53:38.800187] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:45.274 [2024-06-10 09:53:38.800199] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:45.274 [2024-06-10 09:53:38.800212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:45.274 [2024-06-10 09:53:38.800224] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:45.274 [2024-06-10 09:53:38.800241] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:45.274 [2024-06-10 09:53:38.800253] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:45.274 [2024-06-10 09:53:38.800267] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:45.274 [2024-06-10 09:53:38.800281] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:45.274 [2024-06-10 09:53:38.800295] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:45.274 [2024-06-10 09:53:38.800306] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:45.274 [2024-06-10 09:53:38.800319] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:45.274 [2024-06-10 09:53:38.800333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.274 [2024-06-10 09:53:38.800347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:45.274 [2024-06-10 09:53:38.800362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:15:45.274 [2024-06-10 09:53:38.800375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.274 [2024-06-10 09:53:38.817825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.274 [2024-06-10 09:53:38.817892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:45.274 [2024-06-10 09:53:38.817911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.368 ms 00:15:45.274 [2024-06-10 09:53:38.817924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.274 [2024-06-10 09:53:38.818048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.274 [2024-06-10 09:53:38.818069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:45.274 [2024-06-10 09:53:38.818081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:15:45.274 [2024-06-10 09:53:38.818094] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.274 [2024-06-10 09:53:38.856684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.274 [2024-06-10 09:53:38.856744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:45.274 [2024-06-10 09:53:38.856767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.482 ms 00:15:45.274 [2024-06-10 09:53:38.856781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.274 [2024-06-10 09:53:38.856854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.274 [2024-06-10 09:53:38.856874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:45.274 [2024-06-10 09:53:38.856889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:45.274 [2024-06-10 09:53:38.856902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.274 [2024-06-10 09:53:38.857291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.274 [2024-06-10 09:53:38.857315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:45.274 [2024-06-10 09:53:38.857329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:15:45.274 [2024-06-10 09:53:38.857348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.274 [2024-06-10 09:53:38.857501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.274 [2024-06-10 09:53:38.857526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:45.274 [2024-06-10 09:53:38.857539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:15:45.274 [2024-06-10 09:53:38.857552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.274 [2024-06-10 09:53:38.888564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.274 [2024-06-10 09:53:38.888630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:45.274 [2024-06-10 09:53:38.888649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.978 ms 00:15:45.274 [2024-06-10 09:53:38.888663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.274 [2024-06-10 09:53:38.901910] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:45.274 [2024-06-10 09:53:38.915562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.274 [2024-06-10 09:53:38.915652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:45.274 [2024-06-10 09:53:38.915690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.730 ms 00:15:45.274 [2024-06-10 09:53:38.915835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.274 [2024-06-10 09:53:38.974722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.274 [2024-06-10 09:53:38.974802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:45.274 [2024-06-10 09:53:38.974825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.818 ms 00:15:45.274 [2024-06-10 09:53:38.974840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.274 [2024-06-10 09:53:38.974911] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
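This scrub is why the suite passed -t 240 to rpc.py when creating the device: the client must not time out while the NV cache data region is wiped inside the call. The invocation, as issued at the top of this startup trace:

    # -t 240 widens the client-side timeout; the first-startup NV-cache scrub runs inside this call
    $rpc_py -t 240 bdev_ftl_create -b ftl0 \
        -d 35691f0c-5644-4cfb-a2df-2dfa2e6d393e \
        -c nvc0n1p0 --l2p_dram_limit 60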
00:15:45.274 [2024-06-10 09:53:38.974949] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:48.556 [2024-06-10 09:53:41.850992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.556 [2024-06-10 09:53:41.851075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:48.556 [2024-06-10 09:53:41.851097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2876.097 ms 00:15:48.556 [2024-06-10 09:53:41.851109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.556 [2024-06-10 09:53:41.851414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.556 [2024-06-10 09:53:41.851435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:48.556 [2024-06-10 09:53:41.851452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:15:48.556 [2024-06-10 09:53:41.851463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.556 [2024-06-10 09:53:41.881090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.556 [2024-06-10 09:53:41.881149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:48.556 [2024-06-10 09:53:41.881168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.538 ms 00:15:48.556 [2024-06-10 09:53:41.881179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.556 [2024-06-10 09:53:41.910248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.556 [2024-06-10 09:53:41.910283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:48.556 [2024-06-10 09:53:41.910304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.034 ms 00:15:48.556 [2024-06-10 09:53:41.910315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.556 [2024-06-10 09:53:41.910727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.556 [2024-06-10 09:53:41.910747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:48.556 [2024-06-10 09:53:41.910763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:15:48.556 [2024-06-10 09:53:41.910777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.556 [2024-06-10 09:53:41.986147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.556 [2024-06-10 09:53:41.986209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:48.556 [2024-06-10 09:53:41.986229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.301 ms 00:15:48.556 [2024-06-10 09:53:41.986240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.556 [2024-06-10 09:53:42.017760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.556 [2024-06-10 09:53:42.017811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:48.556 [2024-06-10 09:53:42.017829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.461 ms 00:15:48.556 [2024-06-10 09:53:42.017840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.556 [2024-06-10 09:53:42.021754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.556 [2024-06-10 09:53:42.021788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
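Once startup finishes below, the suite does not trust the RPC's return value alone: waitforbdev polls until ftl0 is actually registered. A sketch of that guard using only names visible in the trace (the loop body is an assumption; the real helper lives in autotest_common.sh and also calls bdev_wait_for_examine first):

    # poll for the named bdev, giving up after roughly $timeout milliseconds
    waitforbdev() {
        local bdev_name=$1 timeout=${2:-2000} waited=0
        while (( waited < timeout )); do
            "$rpc_py" bdev_get_bdevs -b "$bdev_name" &>/dev/null && return 0
            sleep 0.1
            (( waited += 100 ))
        done
        return 1
    }
    waitforbdev ftl0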
00:15:48.556 [2024-06-10 09:53:42.021808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.855 ms 00:15:48.556 [2024-06-10 09:53:42.021820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.556 [2024-06-10 09:53:42.054746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.556 [2024-06-10 09:53:42.054793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:48.556 [2024-06-10 09:53:42.054813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.846 ms 00:15:48.556 [2024-06-10 09:53:42.054825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.556 [2024-06-10 09:53:42.054913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.556 [2024-06-10 09:53:42.054935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:48.556 [2024-06-10 09:53:42.054951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:15:48.556 [2024-06-10 09:53:42.054962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.556 [2024-06-10 09:53:42.055132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.556 [2024-06-10 09:53:42.055154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:48.556 [2024-06-10 09:53:42.055171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:15:48.556 [2024-06-10 09:53:42.055183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.556 [2024-06-10 09:53:42.056358] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3270.102 ms, result 0 00:15:48.556 { 00:15:48.556 "name": "ftl0", 00:15:48.556 "uuid": "7ed123ec-e07f-436f-b73b-994aed32d7c1" 00:15:48.556 } 00:15:48.556 09:53:42 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:48.556 09:53:42 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:15:48.556 09:53:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:48.556 09:53:42 -- common/autotest_common.sh@889 -- # local i 00:15:48.556 09:53:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:48.556 09:53:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:48.556 09:53:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:48.556 09:53:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:48.815 [ 00:15:48.815 { 00:15:48.815 "name": "ftl0", 00:15:48.815 "aliases": [ 00:15:48.815 "7ed123ec-e07f-436f-b73b-994aed32d7c1" 00:15:48.815 ], 00:15:48.815 "product_name": "FTL disk", 00:15:48.815 "block_size": 4096, 00:15:48.815 "num_blocks": 20971520, 00:15:48.815 "uuid": "7ed123ec-e07f-436f-b73b-994aed32d7c1", 00:15:48.815 "assigned_rate_limits": { 00:15:48.815 "rw_ios_per_sec": 0, 00:15:48.815 "rw_mbytes_per_sec": 0, 00:15:48.815 "r_mbytes_per_sec": 0, 00:15:48.815 "w_mbytes_per_sec": 0 00:15:48.815 }, 00:15:48.815 "claimed": false, 00:15:48.815 "zoned": false, 00:15:48.815 "supported_io_types": { 00:15:48.815 "read": true, 00:15:48.815 "write": true, 00:15:48.815 "unmap": true, 00:15:48.815 "write_zeroes": true, 00:15:48.815 "flush": true, 00:15:48.815 "reset": false, 00:15:48.815 "compare": false, 00:15:48.815 "compare_and_write": false, 00:15:48.815 "abort": false, 00:15:48.815 "nvme_admin": false, 00:15:48.815 "nvme_io": false 00:15:48.815 }, 
00:15:48.815 "driver_specific": { 00:15:48.815 "ftl": { 00:15:48.815 "base_bdev": "35691f0c-5644-4cfb-a2df-2dfa2e6d393e", 00:15:48.815 "cache": "nvc0n1p0" 00:15:48.815 } 00:15:48.815 } 00:15:48.815 } 00:15:48.815 ] 00:15:48.815 09:53:42 -- common/autotest_common.sh@895 -- # return 0 00:15:48.815 09:53:42 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:48.815 09:53:42 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:49.074 09:53:42 -- ftl/fio.sh@70 -- # echo ']}' 00:15:49.074 09:53:42 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:49.333 [2024-06-10 09:53:42.961405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.333 [2024-06-10 09:53:42.961466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:49.333 [2024-06-10 09:53:42.961486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:49.333 [2024-06-10 09:53:42.961499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.333 [2024-06-10 09:53:42.961540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:49.333 [2024-06-10 09:53:42.964962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.333 [2024-06-10 09:53:42.964991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:49.333 [2024-06-10 09:53:42.965007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.396 ms 00:15:49.333 [2024-06-10 09:53:42.965018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.333 [2024-06-10 09:53:42.965587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.333 [2024-06-10 09:53:42.965614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:49.333 [2024-06-10 09:53:42.965630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:15:49.333 [2024-06-10 09:53:42.965655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.333 [2024-06-10 09:53:42.968968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.333 [2024-06-10 09:53:42.968995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:49.333 [2024-06-10 09:53:42.969026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.277 ms 00:15:49.333 [2024-06-10 09:53:42.969038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.333 [2024-06-10 09:53:42.975836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.333 [2024-06-10 09:53:42.975869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:49.333 [2024-06-10 09:53:42.975889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.742 ms 00:15:49.333 [2024-06-10 09:53:42.975900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.333 [2024-06-10 09:53:43.005789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.333 [2024-06-10 09:53:43.005842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:49.333 [2024-06-10 09:53:43.005860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.764 ms 00:15:49.333 [2024-06-10 09:53:43.005871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.333 [2024-06-10 09:53:43.024454] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.333 [2024-06-10 09:53:43.024508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:49.333 [2024-06-10 09:53:43.024528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.519 ms 00:15:49.333 [2024-06-10 09:53:43.024540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.333 [2024-06-10 09:53:43.024763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.333 [2024-06-10 09:53:43.024784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:49.333 [2024-06-10 09:53:43.024799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:15:49.333 [2024-06-10 09:53:43.024830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.333 [2024-06-10 09:53:43.055813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.333 [2024-06-10 09:53:43.055861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:49.333 [2024-06-10 09:53:43.055879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.940 ms 00:15:49.333 [2024-06-10 09:53:43.055889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.333 [2024-06-10 09:53:43.086126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.333 [2024-06-10 09:53:43.086170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:49.333 [2024-06-10 09:53:43.086190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.180 ms 00:15:49.333 [2024-06-10 09:53:43.086201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.593 [2024-06-10 09:53:43.119166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.593 [2024-06-10 09:53:43.119240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:49.593 [2024-06-10 09:53:43.119291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.902 ms 00:15:49.593 [2024-06-10 09:53:43.119304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.593 [2024-06-10 09:53:43.150688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.593 [2024-06-10 09:53:43.150741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:49.593 [2024-06-10 09:53:43.150759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.230 ms 00:15:49.593 [2024-06-10 09:53:43.150770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.593 [2024-06-10 09:53:43.150825] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:49.593 [2024-06-10 09:53:43.150858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.150877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.150889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.150901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.150912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.150924] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.150935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.150947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.150958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.150970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.150997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 
09:53:43.151314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:49.593 [2024-06-10 09:53:43.151406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:15:49.594 [2024-06-10 09:53:43.151707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.151989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:49.594 [2024-06-10 09:53:43.152397] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:49.594 [2024-06-10 09:53:43.152410] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ed123ec-e07f-436f-b73b-994aed32d7c1 00:15:49.594 [2024-06-10 09:53:43.152421] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:49.594 [2024-06-10 09:53:43.152433] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:49.594 [2024-06-10 09:53:43.152443] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:49.594 [2024-06-10 09:53:43.152456] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:49.594 [2024-06-10 09:53:43.152466] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:49.594 [2024-06-10 09:53:43.152478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:49.594 [2024-06-10 09:53:43.152488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:49.594 [2024-06-10 09:53:43.152499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:49.594 [2024-06-10 09:53:43.152509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:49.594 [2024-06-10 09:53:43.152524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.594 [2024-06-10 09:53:43.152535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:49.594 [2024-06-10 09:53:43.152549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.703 ms 00:15:49.594 [2024-06-10 09:53:43.152562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.594 [2024-06-10 09:53:43.168549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.594 [2024-06-10 09:53:43.168587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:49.594 [2024-06-10 09:53:43.168606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.914 ms 00:15:49.594 [2024-06-10 09:53:43.168617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.594 [2024-06-10 09:53:43.168838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.594 [2024-06-10 09:53:43.168859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:49.594 [2024-06-10 09:53:43.168891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:15:49.594 [2024-06-10 09:53:43.168918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.594 [2024-06-10 09:53:43.226216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.594 [2024-06-10 09:53:43.226311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:49.594 [2024-06-10 09:53:43.226333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.594 [2024-06-10 09:53:43.226345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.594 [2024-06-10 09:53:43.226437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.594 [2024-06-10 09:53:43.226452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:49.594 [2024-06-10 09:53:43.226470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.594 [2024-06-10 09:53:43.226482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.595 [2024-06-10 09:53:43.226619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.595 [2024-06-10 09:53:43.226639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:49.595 [2024-06-10 09:53:43.226654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.595 [2024-06-10 09:53:43.226665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.595 [2024-06-10 09:53:43.226706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.595 [2024-06-10 09:53:43.226719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:15:49.595 [2024-06-10 09:53:43.226734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.595 [2024-06-10 09:53:43.226747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.595 [2024-06-10 09:53:43.341035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.595 [2024-06-10 09:53:43.341094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:49.595 [2024-06-10 09:53:43.341137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.595 [2024-06-10 09:53:43.341152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.853 [2024-06-10 09:53:43.380893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.853 [2024-06-10 09:53:43.380953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:49.853 [2024-06-10 09:53:43.380994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.853 [2024-06-10 09:53:43.381005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.853 [2024-06-10 09:53:43.381124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.853 [2024-06-10 09:53:43.381180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:49.853 [2024-06-10 09:53:43.381197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.853 [2024-06-10 09:53:43.381209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.853 [2024-06-10 09:53:43.381305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.853 [2024-06-10 09:53:43.381321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:49.853 [2024-06-10 09:53:43.381336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.854 [2024-06-10 09:53:43.381347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.854 [2024-06-10 09:53:43.381502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.854 [2024-06-10 09:53:43.381521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:49.854 [2024-06-10 09:53:43.381537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.854 [2024-06-10 09:53:43.381549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.854 [2024-06-10 09:53:43.381622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.854 [2024-06-10 09:53:43.381646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:49.854 [2024-06-10 09:53:43.381662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.854 [2024-06-10 09:53:43.381674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.854 [2024-06-10 09:53:43.381734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.854 [2024-06-10 09:53:43.381749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:49.854 [2024-06-10 09:53:43.381763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.854 [2024-06-10 09:53:43.381775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.854 [2024-06-10 09:53:43.381840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.854 [2024-06-10 09:53:43.381857] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:49.854 [2024-06-10 09:53:43.381873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.854 [2024-06-10 09:53:43.381885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.854 [2024-06-10 09:53:43.382078] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 420.635 ms, result 0 00:15:49.854 true 00:15:49.854 09:53:43 -- ftl/fio.sh@75 -- # killprocess 71982 00:15:49.854 09:53:43 -- common/autotest_common.sh@926 -- # '[' -z 71982 ']' 00:15:49.854 09:53:43 -- common/autotest_common.sh@930 -- # kill -0 71982 00:15:49.854 09:53:43 -- common/autotest_common.sh@931 -- # uname 00:15:49.854 09:53:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:49.854 09:53:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71982 00:15:49.854 killing process with pid 71982 00:15:49.854 09:53:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:49.854 09:53:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:49.854 09:53:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71982' 00:15:49.854 09:53:43 -- common/autotest_common.sh@945 -- # kill 71982 00:15:49.854 09:53:43 -- common/autotest_common.sh@950 -- # wait 71982 00:15:54.046 09:53:47 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:54.046 09:53:47 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:54.046 09:53:47 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:54.046 09:53:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:54.046 09:53:47 -- common/autotest_common.sh@10 -- # set +x 00:15:54.046 09:53:47 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:54.046 09:53:47 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:54.046 09:53:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:54.046 09:53:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:54.046 09:53:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:54.046 09:53:47 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:54.046 09:53:47 -- common/autotest_common.sh@1320 -- # shift 00:15:54.046 09:53:47 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:54.046 09:53:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:54.046 09:53:47 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:54.046 09:53:47 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:54.046 09:53:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:54.046 09:53:47 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:54.046 09:53:47 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:54.046 09:53:47 -- common/autotest_common.sh@1326 -- # break 00:15:54.046 09:53:47 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:54.046 09:53:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:54.306 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:54.306 fio-3.35 00:15:54.306 Starting 1 thread 00:15:59.575 00:15:59.575 test: (groupid=0, jobs=1): err= 0: pid=72215: Mon Jun 10 09:53:52 2024 00:15:59.575 read: IOPS=953, BW=63.3MiB/s (66.4MB/s)(255MiB/4021msec) 00:15:59.575 slat (nsec): min=5373, max=43336, avg=7499.14, stdev=3157.59 00:15:59.575 clat (usec): min=319, max=753, avg=467.78, stdev=56.22 00:15:59.575 lat (usec): min=325, max=773, avg=475.28, stdev=56.87 00:15:59.575 clat percentiles (usec): 00:15:59.575 | 1.00th=[ 355], 5.00th=[ 375], 10.00th=[ 400], 20.00th=[ 429], 00:15:59.575 | 30.00th=[ 441], 40.00th=[ 449], 50.00th=[ 461], 60.00th=[ 474], 00:15:59.575 | 70.00th=[ 490], 80.00th=[ 515], 90.00th=[ 545], 95.00th=[ 570], 00:15:59.575 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 725], 99.95th=[ 742], 00:15:59.575 | 99.99th=[ 750] 00:15:59.575 write: IOPS=960, BW=63.8MiB/s (66.9MB/s)(256MiB/4016msec); 0 zone resets 00:15:59.575 slat (usec): min=18, max=120, avg=24.96, stdev= 6.22 00:15:59.575 clat (usec): min=331, max=1014, avg=530.81, stdev=71.09 00:15:59.576 lat (usec): min=354, max=1041, avg=555.76, stdev=71.76 00:15:59.576 clat percentiles (usec): 00:15:59.576 | 1.00th=[ 392], 5.00th=[ 437], 10.00th=[ 457], 20.00th=[ 474], 00:15:59.576 | 30.00th=[ 494], 40.00th=[ 510], 50.00th=[ 529], 60.00th=[ 545], 00:15:59.576 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 611], 95.00th=[ 635], 00:15:59.576 | 99.00th=[ 832], 99.50th=[ 865], 99.90th=[ 947], 99.95th=[ 979], 00:15:59.576 | 99.99th=[ 1012] 00:15:59.576 bw ( KiB/s): min=61213, max=66912, per=100.00%, avg=65383.62, stdev=1814.20, samples=8 00:15:59.576 iops : min= 900, max= 984, avg=961.50, stdev=26.74, samples=8 00:15:59.576 lat (usec) : 500=53.75%, 750=45.43%, 1000=0.81% 00:15:59.576 lat (msec) : 2=0.01% 00:15:59.576 cpu : usr=99.20%, sys=0.15%, ctx=7, majf=0, minf=1318 00:15:59.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:59.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.576 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:59.576 00:15:59.576 Run status group 0 (all jobs): 00:15:59.576 READ: bw=63.3MiB/s (66.4MB/s), 63.3MiB/s-63.3MiB/s (66.4MB/s-66.4MB/s), io=255MiB (267MB), run=4021-4021msec 00:15:59.576 WRITE: bw=63.8MiB/s (66.9MB/s), 63.8MiB/s-63.8MiB/s (66.9MB/s-66.9MB/s), io=256MiB (269MB), run=4016-4016msec 00:16:00.953 ----------------------------------------------------- 00:16:00.953 Suppressions used: 00:16:00.953 count bytes template 00:16:00.953 1 5 /usr/src/fio/parse.c 00:16:00.953 1 8 libtcmalloc_minimal.so 00:16:00.953 1 904 libcrypto.so 00:16:00.953 ----------------------------------------------------- 00:16:00.953 00:16:00.953 09:53:54 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:00.953 09:53:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:00.953 09:53:54 -- common/autotest_common.sh@10 -- # set +x 00:16:00.953 09:53:54 -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:00.953 09:53:54 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:00.953 09:53:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:00.953 09:53:54 -- common/autotest_common.sh@10 -- # set +x 00:16:00.953 09:53:54 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 
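A note on the fio invocations traced in this log, before the randw-verify-j2 trace continues below: fio is not writing to a kernel block device here. The harness LD_PRELOADs SPDK's fio bdev plugin (build/fio/spdk_bdev, loaded alongside libasan.so.8 in the trace above) so that jobs with ioengine=spdk_bdev submit I/O directly to the ftl0 bdev, using the {"subsystems": [...]} JSON that fio.sh assembled around save_subsystem_config. A minimal standalone sketch of the same pattern follows; the job parameters are illustrative, not the exact contents of the repo's randw-verify*.fio files.

# Sketch: drive an SPDK bdev from fio via the bdev fio plugin.
# Assumes an SPDK tree built with ./configure --with-fio, and that
# /tmp/bdev.json holds the '{"subsystems": [...]}' config produced by
# 'rpc.py save_subsystem_config -n bdev', as fio.sh does above.
cat > /tmp/randw-verify.fio <<'EOF'
[global]
# Route I/O through SPDK's bdev layer instead of the kernel.
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
rw=randwrite
# Write a pattern, then read it back and check it, as the *-verify jobs do.
verify=crc32c

[test]
# For the bdev plugin, filename is a bdev name, not a file path.
filename=ftl0
bs=4k
iodepth=1
size=256M
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio /tmp/randw-verify.fio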
00:16:00.953 09:53:54 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:00.953 09:53:54 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:16:00.953 09:53:54 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:00.953 09:53:54 -- common/autotest_common.sh@1318 -- # local sanitizers 00:16:00.953 09:53:54 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:00.953 09:53:54 -- common/autotest_common.sh@1320 -- # shift 00:16:00.953 09:53:54 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:16:00.953 09:53:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:00.953 09:53:54 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:00.953 09:53:54 -- common/autotest_common.sh@1324 -- # grep libasan 00:16:00.953 09:53:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:00.953 09:53:54 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:00.953 09:53:54 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:00.953 09:53:54 -- common/autotest_common.sh@1326 -- # break 00:16:00.953 09:53:54 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:00.953 09:53:54 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:01.212 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:01.212 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:01.212 fio-3.35 00:16:01.212 Starting 2 threads 00:16:33.327 00:16:33.327 first_half: (groupid=0, jobs=1): err= 0: pid=72321: Mon Jun 10 09:54:24 2024 00:16:33.327 read: IOPS=2280, BW=9122KiB/s (9341kB/s)(255MiB/28588msec) 00:16:33.328 slat (nsec): min=4469, max=28814, avg=6972.27, stdev=1644.33 00:16:33.328 clat (usec): min=942, max=318888, avg=41259.87, stdev=19027.83 00:16:33.328 lat (usec): min=951, max=318893, avg=41266.84, stdev=19027.93 00:16:33.328 clat percentiles (msec): 00:16:33.328 | 1.00th=[ 6], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 38], 00:16:33.328 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39], 00:16:33.328 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 44], 95.00th=[ 51], 00:16:33.328 | 99.00th=[ 140], 99.50th=[ 171], 99.90th=[ 266], 99.95th=[ 296], 00:16:33.328 | 99.99th=[ 309] 00:16:33.328 write: IOPS=3224, BW=12.6MiB/s (13.2MB/s)(256MiB/20325msec); 0 zone resets 00:16:33.328 slat (usec): min=5, max=182, avg= 9.01, stdev= 4.17 00:16:33.328 clat (usec): min=481, max=118068, avg=14764.58, stdev=26765.36 00:16:33.328 lat (usec): min=501, max=118076, avg=14773.58, stdev=26765.49 00:16:33.328 clat percentiles (usec): 00:16:33.328 | 1.00th=[ 1045], 5.00th=[ 1336], 10.00th=[ 1516], 20.00th=[ 1778], 00:16:33.328 | 30.00th=[ 2040], 40.00th=[ 2573], 50.00th=[ 4948], 60.00th=[ 6390], 00:16:33.328 | 70.00th=[ 7963], 80.00th=[ 14353], 90.00th=[ 76022], 95.00th=[ 91751], 00:16:33.328 | 99.00th=[101188], 99.50th=[105382], 99.90th=[112722], 99.95th=[114820], 00:16:33.328 | 99.99th=[115868] 00:16:33.328 bw ( KiB/s): min= 984, max=39592, per=100.00%, avg=20969.24, stdev=9628.46, samples=25 00:16:33.328 iops 
: min= 246, max= 9898, avg=5242.28, stdev=2407.09, samples=25 00:16:33.328 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.33% 00:16:33.328 lat (msec) : 2=14.18%, 4=9.25%, 10=14.32%, 20=6.72%, 50=47.08% 00:16:33.328 lat (msec) : 100=6.36%, 250=1.65%, 500=0.06% 00:16:33.328 cpu : usr=99.24%, sys=0.22%, ctx=39, majf=0, minf=5577 00:16:33.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:33.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.328 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.328 issued rwts: total=65196,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.328 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.328 second_half: (groupid=0, jobs=1): err= 0: pid=72322: Mon Jun 10 09:54:24 2024 00:16:33.328 read: IOPS=2267, BW=9072KiB/s (9289kB/s)(255MiB/28798msec) 00:16:33.328 slat (nsec): min=4538, max=29864, avg=7083.75, stdev=1720.85 00:16:33.328 clat (usec): min=908, max=315039, avg=40102.47, stdev=17599.00 00:16:33.328 lat (usec): min=917, max=315048, avg=40109.55, stdev=17599.15 00:16:33.328 clat percentiles (msec): 00:16:33.328 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 37], 20.00th=[ 38], 00:16:33.328 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39], 00:16:33.328 | 70.00th=[ 40], 80.00th=[ 40], 90.00th=[ 44], 95.00th=[ 49], 00:16:33.328 | 99.00th=[ 140], 99.50th=[ 174], 99.90th=[ 218], 99.95th=[ 222], 00:16:33.328 | 99.99th=[ 309] 00:16:33.328 write: IOPS=2506, BW=9.79MiB/s (10.3MB/s)(256MiB/26150msec); 0 zone resets 00:16:33.328 slat (usec): min=5, max=588, avg= 9.35, stdev= 5.25 00:16:33.328 clat (usec): min=439, max=118142, avg=16255.65, stdev=27325.46 00:16:33.328 lat (usec): min=465, max=118150, avg=16265.00, stdev=27325.73 00:16:33.328 clat percentiles (usec): 00:16:33.328 | 1.00th=[ 971], 5.00th=[ 1237], 10.00th=[ 1418], 20.00th=[ 1762], 00:16:33.328 | 30.00th=[ 2311], 40.00th=[ 4228], 50.00th=[ 5866], 60.00th=[ 7046], 00:16:33.328 | 70.00th=[ 9896], 80.00th=[ 15795], 90.00th=[ 78119], 95.00th=[ 92799], 00:16:33.328 | 99.00th=[102237], 99.50th=[105382], 99.90th=[113771], 99.95th=[114820], 00:16:33.328 | 99.99th=[116917] 00:16:33.328 bw ( KiB/s): min= 976, max=40664, per=93.39%, avg=18725.57, stdev=10245.87, samples=28 00:16:33.328 iops : min= 244, max=10166, avg=4681.36, stdev=2561.44, samples=28 00:16:33.328 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.61% 00:16:33.328 lat (msec) : 2=12.37%, 4=6.63%, 10=15.96%, 20=9.05%, 50=47.51% 00:16:33.328 lat (msec) : 100=6.23%, 250=1.59%, 500=0.01% 00:16:33.328 cpu : usr=99.26%, sys=0.17%, ctx=46, majf=0, minf=5526 00:16:33.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:33.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.328 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.328 issued rwts: total=65311,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.328 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.328 00:16:33.328 Run status group 0 (all jobs): 00:16:33.328 READ: bw=17.7MiB/s (18.6MB/s), 9072KiB/s-9122KiB/s (9289kB/s-9341kB/s), io=510MiB (535MB), run=28588-28798msec 00:16:33.328 WRITE: bw=19.6MiB/s (20.5MB/s), 9.79MiB/s-12.6MiB/s (10.3MB/s-13.2MB/s), io=512MiB (537MB), run=20325-26150msec 00:16:33.328 ----------------------------------------------------- 00:16:33.328 Suppressions used: 00:16:33.328 count bytes template 00:16:33.328 2 10 /usr/src/fio/parse.c 00:16:33.328 1 96 
/usr/src/fio/iolog.c 00:16:33.328 1 8 libtcmalloc_minimal.so 00:16:33.328 1 904 libcrypto.so 00:16:33.328 ----------------------------------------------------- 00:16:33.328 00:16:33.328 09:54:26 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:33.328 09:54:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:33.328 09:54:26 -- common/autotest_common.sh@10 -- # set +x 00:16:33.328 09:54:26 -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:33.328 09:54:26 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:33.328 09:54:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:33.328 09:54:26 -- common/autotest_common.sh@10 -- # set +x 00:16:33.328 09:54:26 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:33.328 09:54:26 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:33.328 09:54:26 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:16:33.328 09:54:26 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:33.328 09:54:26 -- common/autotest_common.sh@1318 -- # local sanitizers 00:16:33.328 09:54:26 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:33.328 09:54:26 -- common/autotest_common.sh@1320 -- # shift 00:16:33.328 09:54:26 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:16:33.328 09:54:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:33.328 09:54:26 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:33.328 09:54:26 -- common/autotest_common.sh@1324 -- # grep libasan 00:16:33.328 09:54:26 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:33.328 09:54:26 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:33.328 09:54:26 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:33.328 09:54:26 -- common/autotest_common.sh@1326 -- # break 00:16:33.328 09:54:26 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:33.328 09:54:26 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:33.328 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:33.328 fio-3.35 00:16:33.328 Starting 1 thread 00:16:51.443 00:16:51.443 test: (groupid=0, jobs=1): err= 0: pid=72685: Mon Jun 10 09:54:43 2024 00:16:51.443 read: IOPS=6693, BW=26.1MiB/s (27.4MB/s)(255MiB/9741msec) 00:16:51.443 slat (nsec): min=4536, max=33155, avg=6471.93, stdev=1618.26 00:16:51.443 clat (usec): min=817, max=36830, avg=19112.16, stdev=1145.01 00:16:51.443 lat (usec): min=822, max=36835, avg=19118.63, stdev=1145.02 00:16:51.443 clat percentiles (usec): 00:16:51.443 | 1.00th=[18220], 5.00th=[18482], 10.00th=[18482], 20.00th=[18744], 00:16:51.443 | 30.00th=[18744], 40.00th=[18744], 50.00th=[19006], 60.00th=[19006], 00:16:51.443 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19530], 95.00th=[21103], 00:16:51.443 | 99.00th=[24773], 99.50th=[24773], 99.90th=[27395], 99.95th=[32375], 00:16:51.443 | 99.99th=[36439] 00:16:51.443 write: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(256MiB/5632msec); 0 zone resets 00:16:51.443 slat (usec): min=5, max=772, avg= 
9.30, stdev= 5.83 00:16:51.443 clat (usec): min=623, max=69594, avg=10939.29, stdev=13404.12 00:16:51.444 lat (usec): min=654, max=69603, avg=10948.59, stdev=13404.13 00:16:51.444 clat percentiles (usec): 00:16:51.444 | 1.00th=[ 938], 5.00th=[ 1139], 10.00th=[ 1270], 20.00th=[ 1450], 00:16:51.444 | 30.00th=[ 1647], 40.00th=[ 2147], 50.00th=[ 7308], 60.00th=[ 8455], 00:16:51.444 | 70.00th=[10028], 80.00th=[12125], 90.00th=[38536], 95.00th=[42206], 00:16:51.444 | 99.00th=[45351], 99.50th=[46400], 99.90th=[49546], 99.95th=[56361], 00:16:51.444 | 99.99th=[67634] 00:16:51.444 bw ( KiB/s): min= 9728, max=64888, per=93.87%, avg=43690.67, stdev=13349.75, samples=12 00:16:51.444 iops : min= 2432, max=16222, avg=10922.67, stdev=3337.44, samples=12 00:16:51.444 lat (usec) : 750=0.03%, 1000=0.85% 00:16:51.444 lat (msec) : 2=18.63%, 4=1.41%, 10=14.19%, 20=53.71%, 50=11.14% 00:16:51.444 lat (msec) : 100=0.04% 00:16:51.444 cpu : usr=98.85%, sys=0.51%, ctx=44, majf=0, minf=5567 00:16:51.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:51.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.444 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:51.444 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:51.444 00:16:51.444 Run status group 0 (all jobs): 00:16:51.444 READ: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=255MiB (267MB), run=9741-9741msec 00:16:51.444 WRITE: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=256MiB (268MB), run=5632-5632msec 00:16:51.444 ----------------------------------------------------- 00:16:51.444 Suppressions used: 00:16:51.444 count bytes template 00:16:51.444 1 5 /usr/src/fio/parse.c 00:16:51.444 2 192 /usr/src/fio/iolog.c 00:16:51.444 1 8 libtcmalloc_minimal.so 00:16:51.444 1 904 libcrypto.so 00:16:51.444 ----------------------------------------------------- 00:16:51.444 00:16:51.444 09:54:45 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:51.444 09:54:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:51.444 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:16:51.444 09:54:45 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:51.444 Remove shared memory files 00:16:51.444 09:54:45 -- ftl/fio.sh@85 -- # remove_shm 00:16:51.444 09:54:45 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:51.444 09:54:45 -- ftl/common.sh@205 -- # rm -f rm -f 00:16:51.444 09:54:45 -- ftl/common.sh@206 -- # rm -f rm -f 00:16:51.444 09:54:45 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56932 /dev/shm/spdk_tgt_trace.pid70907 00:16:51.444 09:54:45 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:51.444 09:54:45 -- ftl/common.sh@209 -- # rm -f rm -f 00:16:51.444 00:16:51.444 real 1m11.389s 00:16:51.444 user 2m39.141s 00:16:51.444 sys 0m3.776s 00:16:51.444 09:54:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.444 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:16:51.444 ************************************ 00:16:51.444 END TEST ftl_fio_basic 00:16:51.444 ************************************ 00:16:51.703 09:54:45 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:16:51.703 09:54:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:51.703 09:54:45 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:16:51.703 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:16:51.703 ************************************ 00:16:51.703 START TEST ftl_bdevperf 00:16:51.703 ************************************ 00:16:51.703 09:54:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:16:51.703 * Looking for test storage... 00:16:51.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:51.703 09:54:45 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:51.703 09:54:45 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:51.703 09:54:45 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:51.703 09:54:45 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:51.703 09:54:45 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:51.703 09:54:45 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:51.703 09:54:45 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:51.703 09:54:45 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:51.703 09:54:45 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:51.703 09:54:45 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:51.703 09:54:45 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:51.703 09:54:45 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:51.703 09:54:45 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:51.703 09:54:45 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:51.703 09:54:45 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:51.703 09:54:45 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:51.703 09:54:45 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:51.704 09:54:45 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:51.704 09:54:45 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:51.704 09:54:45 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:51.704 09:54:45 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:51.704 09:54:45 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:51.704 09:54:45 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:51.704 09:54:45 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:51.704 09:54:45 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:51.704 09:54:45 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:51.704 09:54:45 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:51.704 09:54:45 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:51.704 09:54:45 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:51.704 09:54:45 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:16:51.704 09:54:45 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:16:51.704 09:54:45 -- ftl/bdevperf.sh@13 -- # use_append= 00:16:51.704 09:54:45 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:51.704 09:54:45 -- ftl/bdevperf.sh@15 -- # 
timeout=240 00:16:51.704 09:54:45 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:51.704 09:54:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:51.704 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:16:51.704 09:54:45 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=72933 00:16:51.704 09:54:45 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:51.704 09:54:45 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:51.704 09:54:45 -- ftl/bdevperf.sh@22 -- # waitforlisten 72933 00:16:51.704 09:54:45 -- common/autotest_common.sh@819 -- # '[' -z 72933 ']' 00:16:51.704 09:54:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.704 09:54:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:51.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.704 09:54:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.704 09:54:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:51.704 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:16:51.704 [2024-06-10 09:54:45.436387] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:51.704 [2024-06-10 09:54:45.436573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72933 ] 00:16:51.963 [2024-06-10 09:54:45.615405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.222 [2024-06-10 09:54:45.800398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.789 09:54:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:52.790 09:54:46 -- common/autotest_common.sh@852 -- # return 0 00:16:52.790 09:54:46 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:52.790 09:54:46 -- ftl/common.sh@54 -- # local name=nvme0 00:16:52.790 09:54:46 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:52.790 09:54:46 -- ftl/common.sh@56 -- # local size=103424 00:16:52.790 09:54:46 -- ftl/common.sh@59 -- # local base_bdev 00:16:52.790 09:54:46 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:53.048 09:54:46 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:53.048 09:54:46 -- ftl/common.sh@62 -- # local base_size 00:16:53.048 09:54:46 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:53.048 09:54:46 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:16:53.048 09:54:46 -- common/autotest_common.sh@1358 -- # local bdev_info 00:16:53.048 09:54:46 -- common/autotest_common.sh@1359 -- # local bs 00:16:53.048 09:54:46 -- common/autotest_common.sh@1360 -- # local nb 00:16:53.048 09:54:46 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:53.307 09:54:46 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:16:53.307 { 00:16:53.307 "name": "nvme0n1", 00:16:53.307 "aliases": [ 00:16:53.307 "160de8ce-c395-41ab-9ef3-d35106f4a110" 00:16:53.307 ], 00:16:53.307 "product_name": "NVMe disk", 00:16:53.307 "block_size": 4096, 00:16:53.307 "num_blocks": 1310720, 00:16:53.307 "uuid": 
"160de8ce-c395-41ab-9ef3-d35106f4a110", 00:16:53.307 "assigned_rate_limits": { 00:16:53.307 "rw_ios_per_sec": 0, 00:16:53.307 "rw_mbytes_per_sec": 0, 00:16:53.307 "r_mbytes_per_sec": 0, 00:16:53.307 "w_mbytes_per_sec": 0 00:16:53.307 }, 00:16:53.307 "claimed": true, 00:16:53.307 "claim_type": "read_many_write_one", 00:16:53.307 "zoned": false, 00:16:53.307 "supported_io_types": { 00:16:53.307 "read": true, 00:16:53.307 "write": true, 00:16:53.307 "unmap": true, 00:16:53.307 "write_zeroes": true, 00:16:53.307 "flush": true, 00:16:53.307 "reset": true, 00:16:53.307 "compare": true, 00:16:53.307 "compare_and_write": false, 00:16:53.307 "abort": true, 00:16:53.307 "nvme_admin": true, 00:16:53.307 "nvme_io": true 00:16:53.307 }, 00:16:53.307 "driver_specific": { 00:16:53.307 "nvme": [ 00:16:53.307 { 00:16:53.307 "pci_address": "0000:00:07.0", 00:16:53.307 "trid": { 00:16:53.307 "trtype": "PCIe", 00:16:53.307 "traddr": "0000:00:07.0" 00:16:53.307 }, 00:16:53.307 "ctrlr_data": { 00:16:53.307 "cntlid": 0, 00:16:53.307 "vendor_id": "0x1b36", 00:16:53.307 "model_number": "QEMU NVMe Ctrl", 00:16:53.307 "serial_number": "12341", 00:16:53.307 "firmware_revision": "8.0.0", 00:16:53.307 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:53.307 "oacs": { 00:16:53.307 "security": 0, 00:16:53.307 "format": 1, 00:16:53.307 "firmware": 0, 00:16:53.307 "ns_manage": 1 00:16:53.307 }, 00:16:53.307 "multi_ctrlr": false, 00:16:53.307 "ana_reporting": false 00:16:53.307 }, 00:16:53.307 "vs": { 00:16:53.307 "nvme_version": "1.4" 00:16:53.307 }, 00:16:53.307 "ns_data": { 00:16:53.307 "id": 1, 00:16:53.307 "can_share": false 00:16:53.307 } 00:16:53.307 } 00:16:53.307 ], 00:16:53.307 "mp_policy": "active_passive" 00:16:53.307 } 00:16:53.307 } 00:16:53.307 ]' 00:16:53.307 09:54:46 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:16:53.307 09:54:47 -- common/autotest_common.sh@1362 -- # bs=4096 00:16:53.307 09:54:47 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:16:53.307 09:54:47 -- common/autotest_common.sh@1363 -- # nb=1310720 00:16:53.307 09:54:47 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:16:53.307 09:54:47 -- common/autotest_common.sh@1367 -- # echo 5120 00:16:53.307 09:54:47 -- ftl/common.sh@63 -- # base_size=5120 00:16:53.307 09:54:47 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:53.307 09:54:47 -- ftl/common.sh@67 -- # clear_lvols 00:16:53.307 09:54:47 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:53.307 09:54:47 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:53.566 09:54:47 -- ftl/common.sh@28 -- # stores=85d98bb9-5ffc-4133-b7e8-23a756ab554a 00:16:53.566 09:54:47 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:53.566 09:54:47 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85d98bb9-5ffc-4133-b7e8-23a756ab554a 00:16:53.825 09:54:47 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:54.084 09:54:47 -- ftl/common.sh@68 -- # lvs=3a7e568a-254b-4fbc-af0c-94fc37056e53 00:16:54.084 09:54:47 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3a7e568a-254b-4fbc-af0c-94fc37056e53 00:16:54.342 09:54:47 -- ftl/bdevperf.sh@23 -- # split_bdev=eb8a1002-726d-47a4-96fe-a3b5d34864f7 00:16:54.342 09:54:47 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 eb8a1002-726d-47a4-96fe-a3b5d34864f7 00:16:54.342 09:54:47 -- ftl/common.sh@35 -- # local 
name=nvc0 00:16:54.342 09:54:47 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:54.342 09:54:47 -- ftl/common.sh@37 -- # local base_bdev=eb8a1002-726d-47a4-96fe-a3b5d34864f7 00:16:54.342 09:54:47 -- ftl/common.sh@38 -- # local cache_size= 00:16:54.342 09:54:47 -- ftl/common.sh@41 -- # get_bdev_size eb8a1002-726d-47a4-96fe-a3b5d34864f7 00:16:54.342 09:54:47 -- common/autotest_common.sh@1357 -- # local bdev_name=eb8a1002-726d-47a4-96fe-a3b5d34864f7 00:16:54.342 09:54:47 -- common/autotest_common.sh@1358 -- # local bdev_info 00:16:54.342 09:54:47 -- common/autotest_common.sh@1359 -- # local bs 00:16:54.342 09:54:47 -- common/autotest_common.sh@1360 -- # local nb 00:16:54.342 09:54:48 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eb8a1002-726d-47a4-96fe-a3b5d34864f7 00:16:54.602 09:54:48 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:16:54.602 { 00:16:54.602 "name": "eb8a1002-726d-47a4-96fe-a3b5d34864f7", 00:16:54.602 "aliases": [ 00:16:54.602 "lvs/nvme0n1p0" 00:16:54.602 ], 00:16:54.602 "product_name": "Logical Volume", 00:16:54.602 "block_size": 4096, 00:16:54.602 "num_blocks": 26476544, 00:16:54.602 "uuid": "eb8a1002-726d-47a4-96fe-a3b5d34864f7", 00:16:54.602 "assigned_rate_limits": { 00:16:54.602 "rw_ios_per_sec": 0, 00:16:54.602 "rw_mbytes_per_sec": 0, 00:16:54.602 "r_mbytes_per_sec": 0, 00:16:54.602 "w_mbytes_per_sec": 0 00:16:54.602 }, 00:16:54.602 "claimed": false, 00:16:54.602 "zoned": false, 00:16:54.602 "supported_io_types": { 00:16:54.602 "read": true, 00:16:54.602 "write": true, 00:16:54.602 "unmap": true, 00:16:54.602 "write_zeroes": true, 00:16:54.602 "flush": false, 00:16:54.602 "reset": true, 00:16:54.602 "compare": false, 00:16:54.602 "compare_and_write": false, 00:16:54.602 "abort": false, 00:16:54.602 "nvme_admin": false, 00:16:54.602 "nvme_io": false 00:16:54.602 }, 00:16:54.602 "driver_specific": { 00:16:54.602 "lvol": { 00:16:54.602 "lvol_store_uuid": "3a7e568a-254b-4fbc-af0c-94fc37056e53", 00:16:54.602 "base_bdev": "nvme0n1", 00:16:54.602 "thin_provision": true, 00:16:54.602 "snapshot": false, 00:16:54.602 "clone": false, 00:16:54.602 "esnap_clone": false 00:16:54.602 } 00:16:54.602 } 00:16:54.602 } 00:16:54.602 ]' 00:16:54.602 09:54:48 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:16:54.602 09:54:48 -- common/autotest_common.sh@1362 -- # bs=4096 00:16:54.602 09:54:48 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:16:54.602 09:54:48 -- common/autotest_common.sh@1363 -- # nb=26476544 00:16:54.602 09:54:48 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:16:54.602 09:54:48 -- common/autotest_common.sh@1367 -- # echo 103424 00:16:54.602 09:54:48 -- ftl/common.sh@41 -- # local base_size=5171 00:16:54.602 09:54:48 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:54.602 09:54:48 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:54.861 09:54:48 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:54.861 09:54:48 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:54.861 09:54:48 -- ftl/common.sh@48 -- # get_bdev_size eb8a1002-726d-47a4-96fe-a3b5d34864f7 00:16:54.861 09:54:48 -- common/autotest_common.sh@1357 -- # local bdev_name=eb8a1002-726d-47a4-96fe-a3b5d34864f7 00:16:54.861 09:54:48 -- common/autotest_common.sh@1358 -- # local bdev_info 00:16:54.861 09:54:48 -- common/autotest_common.sh@1359 -- # local bs 00:16:54.861 09:54:48 -- common/autotest_common.sh@1360 -- # local nb 00:16:54.861 
09:54:48 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eb8a1002-726d-47a4-96fe-a3b5d34864f7 00:16:55.119 09:54:48 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:16:55.119 { 00:16:55.119 "name": "eb8a1002-726d-47a4-96fe-a3b5d34864f7", 00:16:55.119 "aliases": [ 00:16:55.119 "lvs/nvme0n1p0" 00:16:55.119 ], 00:16:55.119 "product_name": "Logical Volume", 00:16:55.119 "block_size": 4096, 00:16:55.119 "num_blocks": 26476544, 00:16:55.120 "uuid": "eb8a1002-726d-47a4-96fe-a3b5d34864f7", 00:16:55.120 "assigned_rate_limits": { 00:16:55.120 "rw_ios_per_sec": 0, 00:16:55.120 "rw_mbytes_per_sec": 0, 00:16:55.120 "r_mbytes_per_sec": 0, 00:16:55.120 "w_mbytes_per_sec": 0 00:16:55.120 }, 00:16:55.120 "claimed": false, 00:16:55.120 "zoned": false, 00:16:55.120 "supported_io_types": { 00:16:55.120 "read": true, 00:16:55.120 "write": true, 00:16:55.120 "unmap": true, 00:16:55.120 "write_zeroes": true, 00:16:55.120 "flush": false, 00:16:55.120 "reset": true, 00:16:55.120 "compare": false, 00:16:55.120 "compare_and_write": false, 00:16:55.120 "abort": false, 00:16:55.120 "nvme_admin": false, 00:16:55.120 "nvme_io": false 00:16:55.120 }, 00:16:55.120 "driver_specific": { 00:16:55.120 "lvol": { 00:16:55.120 "lvol_store_uuid": "3a7e568a-254b-4fbc-af0c-94fc37056e53", 00:16:55.120 "base_bdev": "nvme0n1", 00:16:55.120 "thin_provision": true, 00:16:55.120 "snapshot": false, 00:16:55.120 "clone": false, 00:16:55.120 "esnap_clone": false 00:16:55.120 } 00:16:55.120 } 00:16:55.120 } 00:16:55.120 ]' 00:16:55.120 09:54:48 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:16:55.120 09:54:48 -- common/autotest_common.sh@1362 -- # bs=4096 00:16:55.120 09:54:48 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:16:55.378 09:54:48 -- common/autotest_common.sh@1363 -- # nb=26476544 00:16:55.378 09:54:48 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:16:55.378 09:54:48 -- common/autotest_common.sh@1367 -- # echo 103424 00:16:55.378 09:54:48 -- ftl/common.sh@48 -- # cache_size=5171 00:16:55.378 09:54:48 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:55.378 09:54:49 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:16:55.378 09:54:49 -- ftl/bdevperf.sh@26 -- # get_bdev_size eb8a1002-726d-47a4-96fe-a3b5d34864f7 00:16:55.378 09:54:49 -- common/autotest_common.sh@1357 -- # local bdev_name=eb8a1002-726d-47a4-96fe-a3b5d34864f7 00:16:55.378 09:54:49 -- common/autotest_common.sh@1358 -- # local bdev_info 00:16:55.378 09:54:49 -- common/autotest_common.sh@1359 -- # local bs 00:16:55.378 09:54:49 -- common/autotest_common.sh@1360 -- # local nb 00:16:55.378 09:54:49 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eb8a1002-726d-47a4-96fe-a3b5d34864f7 00:16:55.637 09:54:49 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:16:55.637 { 00:16:55.637 "name": "eb8a1002-726d-47a4-96fe-a3b5d34864f7", 00:16:55.637 "aliases": [ 00:16:55.637 "lvs/nvme0n1p0" 00:16:55.637 ], 00:16:55.637 "product_name": "Logical Volume", 00:16:55.637 "block_size": 4096, 00:16:55.637 "num_blocks": 26476544, 00:16:55.637 "uuid": "eb8a1002-726d-47a4-96fe-a3b5d34864f7", 00:16:55.637 "assigned_rate_limits": { 00:16:55.637 "rw_ios_per_sec": 0, 00:16:55.637 "rw_mbytes_per_sec": 0, 00:16:55.637 "r_mbytes_per_sec": 0, 00:16:55.637 "w_mbytes_per_sec": 0 00:16:55.637 }, 00:16:55.637 "claimed": false, 00:16:55.637 "zoned": false, 00:16:55.637 "supported_io_types": { 
00:16:55.637 "read": true, 00:16:55.637 "write": true, 00:16:55.637 "unmap": true, 00:16:55.637 "write_zeroes": true, 00:16:55.637 "flush": false, 00:16:55.637 "reset": true, 00:16:55.637 "compare": false, 00:16:55.637 "compare_and_write": false, 00:16:55.637 "abort": false, 00:16:55.637 "nvme_admin": false, 00:16:55.637 "nvme_io": false 00:16:55.637 }, 00:16:55.637 "driver_specific": { 00:16:55.637 "lvol": { 00:16:55.637 "lvol_store_uuid": "3a7e568a-254b-4fbc-af0c-94fc37056e53", 00:16:55.637 "base_bdev": "nvme0n1", 00:16:55.637 "thin_provision": true, 00:16:55.637 "snapshot": false, 00:16:55.637 "clone": false, 00:16:55.637 "esnap_clone": false 00:16:55.637 } 00:16:55.637 } 00:16:55.637 } 00:16:55.637 ]' 00:16:55.637 09:54:49 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:16:55.896 09:54:49 -- common/autotest_common.sh@1362 -- # bs=4096 00:16:55.896 09:54:49 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:16:55.896 09:54:49 -- common/autotest_common.sh@1363 -- # nb=26476544 00:16:55.896 09:54:49 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:16:55.897 09:54:49 -- common/autotest_common.sh@1367 -- # echo 103424 00:16:55.897 09:54:49 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:16:55.897 09:54:49 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d eb8a1002-726d-47a4-96fe-a3b5d34864f7 -c nvc0n1p0 --l2p_dram_limit 20 00:16:56.157 [2024-06-10 09:54:49.687546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.157 [2024-06-10 09:54:49.687624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:56.157 [2024-06-10 09:54:49.687647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:56.157 [2024-06-10 09:54:49.687674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.157 [2024-06-10 09:54:49.687780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.157 [2024-06-10 09:54:49.687797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.157 [2024-06-10 09:54:49.687811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:56.157 [2024-06-10 09:54:49.687822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.157 [2024-06-10 09:54:49.687849] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:56.157 [2024-06-10 09:54:49.688814] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:56.157 [2024-06-10 09:54:49.688849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.157 [2024-06-10 09:54:49.688862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.157 [2024-06-10 09:54:49.688876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:16:56.157 [2024-06-10 09:54:49.688887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.157 [2024-06-10 09:54:49.689016] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9ee431ec-b9e0-4e74-8bc2-e042c054f8b9 00:16:56.157 [2024-06-10 09:54:49.690217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.157 [2024-06-10 09:54:49.690255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:56.157 [2024-06-10 09:54:49.690287] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:56.157 [2024-06-10 09:54:49.690301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.157 [2024-06-10 09:54:49.695052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.157 [2024-06-10 09:54:49.695113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.157 [2024-06-10 09:54:49.695139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.693 ms 00:16:56.157 [2024-06-10 09:54:49.695156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.157 [2024-06-10 09:54:49.695267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.157 [2024-06-10 09:54:49.695288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.157 [2024-06-10 09:54:49.695300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:16:56.157 [2024-06-10 09:54:49.695316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.157 [2024-06-10 09:54:49.695422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.157 [2024-06-10 09:54:49.695441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:56.157 [2024-06-10 09:54:49.695454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:56.157 [2024-06-10 09:54:49.695467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.157 [2024-06-10 09:54:49.695514] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:56.157 [2024-06-10 09:54:49.699840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.157 [2024-06-10 09:54:49.699887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.157 [2024-06-10 09:54:49.699922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.336 ms 00:16:56.157 [2024-06-10 09:54:49.699932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.157 [2024-06-10 09:54:49.699990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.157 [2024-06-10 09:54:49.700004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:56.157 [2024-06-10 09:54:49.700018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:56.157 [2024-06-10 09:54:49.700029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.157 [2024-06-10 09:54:49.700085] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:56.157 [2024-06-10 09:54:49.700299] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:56.157 [2024-06-10 09:54:49.700328] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:56.157 [2024-06-10 09:54:49.700343] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:56.157 [2024-06-10 09:54:49.700360] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:56.157 [2024-06-10 09:54:49.700374] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:56.157 [2024-06-10 09:54:49.700390] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 
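For reference, the FTL stack being brought up here was assembled by the rpc.py calls traced above. A minimal recap of that sequence, using the paths from this run; <lvs-uuid> and <lvol-uuid> are placeholders for the UUIDs rpc.py returns, which differ on every run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0    # base NVMe -> nvme0n1
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                            # lvstore on the base bdev
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>              # thin-provisioned 103424 MiB lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0     # cache NVMe -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                              # 5171 MiB split -> nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20

The --l2p_dram_limit 20 flag caps the resident L2P at 20 MiB; the full map of 20971520 four-byte entries is exactly the 80.00 MiB l2p region in this layout dump, and the ftl_l2p_cache notice further down confirms that at most 19 of those 20 MiB stay resident.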
00:16:56.157 [2024-06-10 09:54:49.700402] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:56.157 [2024-06-10 09:54:49.700415] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:56.157 [2024-06-10 09:54:49.700427] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:56.157 [2024-06-10 09:54:49.700444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.157 [2024-06-10 09:54:49.700456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:56.157 [2024-06-10 09:54:49.700471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:16:56.157 [2024-06-10 09:54:49.700482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.157 [2024-06-10 09:54:49.700557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.157 [2024-06-10 09:54:49.700572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:56.157 [2024-06-10 09:54:49.700586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:56.158 [2024-06-10 09:54:49.700597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.158 [2024-06-10 09:54:49.700690] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:56.158 [2024-06-10 09:54:49.700708] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:56.158 [2024-06-10 09:54:49.700723] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.158 [2024-06-10 09:54:49.700735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.158 [2024-06-10 09:54:49.700749] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:56.158 [2024-06-10 09:54:49.700759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:56.158 [2024-06-10 09:54:49.700772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:56.158 [2024-06-10 09:54:49.700784] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:56.158 [2024-06-10 09:54:49.700811] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:56.158 [2024-06-10 09:54:49.700821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.158 [2024-06-10 09:54:49.700834] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:56.158 [2024-06-10 09:54:49.700845] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:56.158 [2024-06-10 09:54:49.700857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.158 [2024-06-10 09:54:49.700867] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:56.158 [2024-06-10 09:54:49.700880] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:56.158 [2024-06-10 09:54:49.700890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.158 [2024-06-10 09:54:49.700904] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:56.158 [2024-06-10 09:54:49.700915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:56.158 [2024-06-10 09:54:49.700927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.158 [2024-06-10 09:54:49.700937] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:56.158 [2024-06-10 09:54:49.700949] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:56.158 [2024-06-10 09:54:49.700960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:56.158 [2024-06-10 09:54:49.700972] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:56.158 [2024-06-10 09:54:49.700997] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:56.158 [2024-06-10 09:54:49.701009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.158 [2024-06-10 09:54:49.701019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:56.158 [2024-06-10 09:54:49.701031] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:56.158 [2024-06-10 09:54:49.701041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.158 [2024-06-10 09:54:49.701053] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:56.158 [2024-06-10 09:54:49.701063] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:56.158 [2024-06-10 09:54:49.701074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.158 [2024-06-10 09:54:49.701084] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:56.158 [2024-06-10 09:54:49.701098] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:56.158 [2024-06-10 09:54:49.701107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.158 [2024-06-10 09:54:49.701120] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:56.158 [2024-06-10 09:54:49.701130] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:56.158 [2024-06-10 09:54:49.701158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.158 [2024-06-10 09:54:49.701170] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:56.158 [2024-06-10 09:54:49.701183] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:56.158 [2024-06-10 09:54:49.701193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.158 [2024-06-10 09:54:49.701206] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:56.158 [2024-06-10 09:54:49.701217] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:56.158 [2024-06-10 09:54:49.701230] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.158 [2024-06-10 09:54:49.701241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.158 [2024-06-10 09:54:49.701254] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:56.158 [2024-06-10 09:54:49.701264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:56.158 [2024-06-10 09:54:49.701276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:56.158 [2024-06-10 09:54:49.701286] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:56.158 [2024-06-10 09:54:49.701300] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:56.158 [2024-06-10 09:54:49.701310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:56.158 [2024-06-10 09:54:49.701323] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:56.158 [2024-06-10 09:54:49.701337] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.158 [2024-06-10 09:54:49.701352] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:56.158 [2024-06-10 09:54:49.701363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:56.158 [2024-06-10 09:54:49.701376] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:56.158 [2024-06-10 09:54:49.701387] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:56.158 [2024-06-10 09:54:49.701400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:56.158 [2024-06-10 09:54:49.701411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:56.158 [2024-06-10 09:54:49.701424] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:56.158 [2024-06-10 09:54:49.701435] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:56.158 [2024-06-10 09:54:49.701447] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:56.158 [2024-06-10 09:54:49.701458] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:56.158 [2024-06-10 09:54:49.701473] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:56.158 [2024-06-10 09:54:49.701484] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:56.158 [2024-06-10 09:54:49.701499] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:56.158 [2024-06-10 09:54:49.701510] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:56.158 [2024-06-10 09:54:49.701524] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.158 [2024-06-10 09:54:49.701538] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:56.158 [2024-06-10 09:54:49.701560] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:56.158 [2024-06-10 09:54:49.701571] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:56.158 [2024-06-10 09:54:49.701584] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:56.158 [2024-06-10 09:54:49.701596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.158 [2024-06-10 
09:54:49.701610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:56.158 [2024-06-10 09:54:49.701622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:16:56.158 [2024-06-10 09:54:49.701635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.158 [2024-06-10 09:54:49.718727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.158 [2024-06-10 09:54:49.718794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.158 [2024-06-10 09:54:49.718812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.029 ms 00:16:56.158 [2024-06-10 09:54:49.718824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.158 [2024-06-10 09:54:49.718928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.158 [2024-06-10 09:54:49.718957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:56.158 [2024-06-10 09:54:49.718969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:56.158 [2024-06-10 09:54:49.718980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.158 [2024-06-10 09:54:49.767331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.158 [2024-06-10 09:54:49.767395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.158 [2024-06-10 09:54:49.767417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.272 ms 00:16:56.158 [2024-06-10 09:54:49.767431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.158 [2024-06-10 09:54:49.767499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.158 [2024-06-10 09:54:49.767518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.158 [2024-06-10 09:54:49.767533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:56.158 [2024-06-10 09:54:49.767549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.158 [2024-06-10 09:54:49.767938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.158 [2024-06-10 09:54:49.767972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.158 [2024-06-10 09:54:49.767987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:16:56.158 [2024-06-10 09:54:49.768000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.158 [2024-06-10 09:54:49.768158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.159 [2024-06-10 09:54:49.768246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.159 [2024-06-10 09:54:49.768261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:16:56.159 [2024-06-10 09:54:49.768275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.159 [2024-06-10 09:54:49.785503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.159 [2024-06-10 09:54:49.785562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.159 [2024-06-10 09:54:49.785581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.200 ms 00:16:56.159 [2024-06-10 09:54:49.785596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.159 [2024-06-10 09:54:49.799851] ftl_l2p_cache.c: 
458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:56.159 [2024-06-10 09:54:49.804970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.159 [2024-06-10 09:54:49.805029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:56.159 [2024-06-10 09:54:49.805049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.229 ms 00:16:56.159 [2024-06-10 09:54:49.805061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.159 [2024-06-10 09:54:49.870433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.159 [2024-06-10 09:54:49.870562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:56.159 [2024-06-10 09:54:49.870585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.279 ms 00:16:56.159 [2024-06-10 09:54:49.870597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.159 [2024-06-10 09:54:49.870679] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:56.159 [2024-06-10 09:54:49.870701] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:58.691 [2024-06-10 09:54:51.918849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.691 [2024-06-10 09:54:51.918928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:58.691 [2024-06-10 09:54:51.918951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2048.177 ms 00:16:58.691 [2024-06-10 09:54:51.918963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.691 [2024-06-10 09:54:51.919240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.691 [2024-06-10 09:54:51.919263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:58.691 [2024-06-10 09:54:51.919278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:16:58.691 [2024-06-10 09:54:51.919289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.691 [2024-06-10 09:54:51.950323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.691 [2024-06-10 09:54:51.950376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:58.691 [2024-06-10 09:54:51.950394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.941 ms 00:16:58.691 [2024-06-10 09:54:51.950406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.691 [2024-06-10 09:54:51.979399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.691 [2024-06-10 09:54:51.979446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:58.691 [2024-06-10 09:54:51.979470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.940 ms 00:16:58.691 [2024-06-10 09:54:51.979482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.691 [2024-06-10 09:54:51.979883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.691 [2024-06-10 09:54:51.979902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:58.691 [2024-06-10 09:54:51.979916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:16:58.691 [2024-06-10 09:54:51.979929] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:58.691 [2024-06-10 09:54:52.055591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.691 [2024-06-10 09:54:52.055661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:58.691 [2024-06-10 09:54:52.055685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.594 ms 00:16:58.691 [2024-06-10 09:54:52.055698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.691 [2024-06-10 09:54:52.087298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.691 [2024-06-10 09:54:52.087358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:58.691 [2024-06-10 09:54:52.087380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.507 ms 00:16:58.691 [2024-06-10 09:54:52.087393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.691 [2024-06-10 09:54:52.089423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.691 [2024-06-10 09:54:52.089473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:58.691 [2024-06-10 09:54:52.089493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.983 ms 00:16:58.691 [2024-06-10 09:54:52.089505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.691 [2024-06-10 09:54:52.119998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.691 [2024-06-10 09:54:52.120051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:58.691 [2024-06-10 09:54:52.120069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.415 ms 00:16:58.691 [2024-06-10 09:54:52.120081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.691 [2024-06-10 09:54:52.120176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.691 [2024-06-10 09:54:52.120195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:58.691 [2024-06-10 09:54:52.120211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:58.691 [2024-06-10 09:54:52.120223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.691 [2024-06-10 09:54:52.120344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.691 [2024-06-10 09:54:52.120364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:58.691 [2024-06-10 09:54:52.120379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:58.691 [2024-06-10 09:54:52.120391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.691 [2024-06-10 09:54:52.121419] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2433.403 ms, result 0 00:16:58.691 { 00:16:58.691 "name": "ftl0", 00:16:58.691 "uuid": "9ee431ec-b9e0-4e74-8bc2-e042c054f8b9" 00:16:58.691 } 00:16:58.691 09:54:52 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:58.691 09:54:52 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:16:58.691 09:54:52 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:16:58.691 09:54:52 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:58.950 [2024-06-10 09:54:52.509859] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel created on ftl0 00:16:58.950 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:58.950 Zero copy mechanism will not be used. 00:16:58.950 Running I/O for 4 seconds... 00:17:03.137 00:17:03.137 Latency(us) 00:17:03.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.137 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:03.137 ftl0 : 4.00 1771.21 117.62 0.00 0.00 591.75 243.90 997.93 00:17:03.137 =================================================================================================================== 00:17:03.137 Total : 1771.21 117.62 0.00 0.00 591.75 243.90 997.93 00:17:03.137 [2024-06-10 09:54:56.520284] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:03.137 0 00:17:03.137 09:54:56 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:03.137 [2024-06-10 09:54:56.638704] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:03.137 Running I/O for 4 seconds... 00:17:07.327 00:17:07.327 Latency(us) 00:17:07.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.327 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.327 ftl0 : 4.02 7203.26 28.14 0.00 0.00 17717.95 366.78 36223.53 00:17:07.327 =================================================================================================================== 00:17:07.327 Total : 7203.26 28.14 0.00 0.00 17717.95 0.00 36223.53 00:17:07.327 0 00:17:07.327 [2024-06-10 09:55:00.670325] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:07.327 09:55:00 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:07.327 [2024-06-10 09:55:00.816036] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:07.327 Running I/O for 4 seconds... 
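All three measurements in this test reuse a single bdevperf process: it was launched idle with -z -T ftl0, and each workload is then kicked off over RPC with the bdevperf.py helper, exactly as the invocations above show. A minimal sketch of that pattern, assuming the repo paths from this run:

    app=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    perf=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    $app -z -T ftl0 &                                       # start idle; -z waits for RPC before running I/O
    $perf perform_tests -q 1   -w randwrite -t 4 -o 69632   # 68 KiB writes at queue depth 1
    $perf perform_tests -q 128 -w randwrite -t 4 -o 4096    # 4 KiB random writes at queue depth 128
    $perf perform_tests -q 128 -w verify    -t 4 -o 4096    # 4 KiB writes with read-back verification

The 69632-byte size in the first run is 64 KiB + 4 KiB, which is why bdevperf reported it exceeds the 65536-byte zero-copy threshold and that the zero copy mechanism would not be used for that run.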
00:17:11.516 00:17:11.516 Latency(us) 00:17:11.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.516 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:11.516 Verification LBA range: start 0x0 length 0x1400000 00:17:11.516 ftl0 : 4.01 8815.38 34.44 0.00 0.00 14479.97 211.32 30265.72 00:17:11.516 =================================================================================================================== 00:17:11.516 Total : 8815.38 34.44 0.00 0.00 14479.97 0.00 30265.72 00:17:11.516 [2024-06-10 09:55:04.843469] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:11.516 0 00:17:11.516 09:55:04 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:11.516 [2024-06-10 09:55:05.109495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.516 [2024-06-10 09:55:05.109859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:11.516 [2024-06-10 09:55:05.109996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:11.516 [2024-06-10 09:55:05.110082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.516 [2024-06-10 09:55:05.110248] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:11.516 [2024-06-10 09:55:05.114046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.516 [2024-06-10 09:55:05.114228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:11.516 [2024-06-10 09:55:05.114317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.651 ms 00:17:11.516 [2024-06-10 09:55:05.114407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.516 [2024-06-10 09:55:05.116197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.516 [2024-06-10 09:55:05.116324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:11.516 [2024-06-10 09:55:05.116418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.681 ms 00:17:11.516 [2024-06-10 09:55:05.116499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.783 [2024-06-10 09:55:05.290788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.783 [2024-06-10 09:55:05.291032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:11.783 [2024-06-10 09:55:05.291175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 174.174 ms 00:17:11.783 [2024-06-10 09:55:05.291271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.783 [2024-06-10 09:55:05.297335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.783 [2024-06-10 09:55:05.297455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:11.783 [2024-06-10 09:55:05.297527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.921 ms 00:17:11.783 [2024-06-10 09:55:05.297635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.783 [2024-06-10 09:55:05.324895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.783 [2024-06-10 09:55:05.324951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:11.783 [2024-06-10 09:55:05.324967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
27.170 ms 00:17:11.783 [2024-06-10 09:55:05.324981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.783 [2024-06-10 09:55:05.342986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.783 [2024-06-10 09:55:05.343045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:11.783 [2024-06-10 09:55:05.343062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.950 ms 00:17:11.783 [2024-06-10 09:55:05.343075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.783 [2024-06-10 09:55:05.343260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.783 [2024-06-10 09:55:05.343286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:11.783 [2024-06-10 09:55:05.343313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:17:11.783 [2024-06-10 09:55:05.343325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.783 [2024-06-10 09:55:05.372350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.783 [2024-06-10 09:55:05.372436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:11.783 [2024-06-10 09:55:05.372456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.971 ms 00:17:11.783 [2024-06-10 09:55:05.372470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.783 [2024-06-10 09:55:05.400517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.783 [2024-06-10 09:55:05.400557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:11.783 [2024-06-10 09:55:05.400573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.950 ms 00:17:11.783 [2024-06-10 09:55:05.400587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.783 [2024-06-10 09:55:05.429839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.783 [2024-06-10 09:55:05.429887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:11.783 [2024-06-10 09:55:05.429904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.211 ms 00:17:11.783 [2024-06-10 09:55:05.429916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.783 [2024-06-10 09:55:05.461900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.783 [2024-06-10 09:55:05.461955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:11.783 [2024-06-10 09:55:05.461974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.856 ms 00:17:11.783 [2024-06-10 09:55:05.461988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.783 [2024-06-10 09:55:05.462053] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:11.783 [2024-06-10 09:55:05.462099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 
09:55:05.462228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:17:11.783 [2024-06-10 09:55:05.462707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:11.783 [2024-06-10 09:55:05.462879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.462893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.462905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.462918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.462931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.462944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.462956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.462972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.462997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:11.784 [2024-06-10 09:55:05.463777] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:11.784 [2024-06-10 09:55:05.463789] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9ee431ec-b9e0-4e74-8bc2-e042c054f8b9 00:17:11.784 [2024-06-10 09:55:05.463804] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:11.784 [2024-06-10 09:55:05.463814] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:11.784 
[2024-06-10 09:55:05.463826] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:11.784 [2024-06-10 09:55:05.463837] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:11.784 [2024-06-10 09:55:05.463864] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:11.784 [2024-06-10 09:55:05.463875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:11.784 [2024-06-10 09:55:05.463887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:11.784 [2024-06-10 09:55:05.463896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:11.784 [2024-06-10 09:55:05.463907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:11.784 [2024-06-10 09:55:05.463918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.784 [2024-06-10 09:55:05.463933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:11.784 [2024-06-10 09:55:05.463945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.868 ms 00:17:11.784 [2024-06-10 09:55:05.463957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.784 [2024-06-10 09:55:05.480945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.784 [2024-06-10 09:55:05.480986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:11.784 [2024-06-10 09:55:05.481002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.928 ms 00:17:11.784 [2024-06-10 09:55:05.481017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.784 [2024-06-10 09:55:05.481289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.784 [2024-06-10 09:55:05.481310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:11.784 [2024-06-10 09:55:05.481323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:17:11.784 [2024-06-10 09:55:05.481335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.784 [2024-06-10 09:55:05.527572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.784 [2024-06-10 09:55:05.527660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:11.784 [2024-06-10 09:55:05.527694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.784 [2024-06-10 09:55:05.527706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.784 [2024-06-10 09:55:05.527783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.784 [2024-06-10 09:55:05.527800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:11.784 [2024-06-10 09:55:05.527828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.784 [2024-06-10 09:55:05.527856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.784 [2024-06-10 09:55:05.527965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.784 [2024-06-10 09:55:05.527993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:11.784 [2024-06-10 09:55:05.528007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.784 [2024-06-10 09:55:05.528022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.784 [2024-06-10 09:55:05.528046] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:11.784 [2024-06-10 09:55:05.528074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:11.784 [2024-06-10 09:55:05.528085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.784 [2024-06-10 09:55:05.528098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.069 [2024-06-10 09:55:05.623135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.069 [2024-06-10 09:55:05.623203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:12.069 [2024-06-10 09:55:05.623223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.069 [2024-06-10 09:55:05.623236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.069 [2024-06-10 09:55:05.661871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.069 [2024-06-10 09:55:05.661925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:12.069 [2024-06-10 09:55:05.661944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.069 [2024-06-10 09:55:05.661958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.069 [2024-06-10 09:55:05.662101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.069 [2024-06-10 09:55:05.662137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:12.069 [2024-06-10 09:55:05.662148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.069 [2024-06-10 09:55:05.662181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.069 [2024-06-10 09:55:05.662274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.069 [2024-06-10 09:55:05.662294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:12.069 [2024-06-10 09:55:05.662309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.069 [2024-06-10 09:55:05.662321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.069 [2024-06-10 09:55:05.662431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.069 [2024-06-10 09:55:05.662452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:12.069 [2024-06-10 09:55:05.662465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.069 [2024-06-10 09:55:05.662477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.069 [2024-06-10 09:55:05.662526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.069 [2024-06-10 09:55:05.662546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:12.069 [2024-06-10 09:55:05.662558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.069 [2024-06-10 09:55:05.662587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.069 [2024-06-10 09:55:05.662643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.069 [2024-06-10 09:55:05.662659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:12.069 [2024-06-10 09:55:05.662670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.069 [2024-06-10 09:55:05.662683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:12.069 [2024-06-10 09:55:05.662750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:12.069 [2024-06-10 09:55:05.662770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:12.069 [2024-06-10 09:55:05.662785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:12.069 [2024-06-10 09:55:05.662798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.069 [2024-06-10 09:55:05.662951] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 553.443 ms, result 0
00:17:12.069 true
00:17:12.069 09:55:05 -- ftl/bdevperf.sh@37 -- # killprocess 72933
00:17:12.069 09:55:05 -- common/autotest_common.sh@926 -- # '[' -z 72933 ']'
00:17:12.069 09:55:05 -- common/autotest_common.sh@930 -- # kill -0 72933
00:17:12.069 09:55:05 -- common/autotest_common.sh@931 -- # uname
00:17:12.069 09:55:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:12.069 09:55:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72933
00:17:12.069 killing process with pid 72933
Received shutdown signal, test time was about 4.000000 seconds
00:17:12.069
00:17:12.069                                                  Latency(us)
00:17:12.069 Device Information : runtime(s)   IOPS   MiB/s   Fail/s   TO/s   Average   min   max
00:17:12.069 ===================================================================================================================
00:17:12.069 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:12.069 09:55:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:17:12.069 09:55:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:17:12.069 09:55:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72933'
00:17:12.069 09:55:05 -- common/autotest_common.sh@945 -- # kill 72933
00:17:12.069 09:55:05 -- common/autotest_common.sh@950 -- # wait 72933
00:17:15.357 09:55:09 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:17:15.357 09:55:09 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:17:15.357 09:55:09 -- common/autotest_common.sh@718 -- # xtrace_disable
00:17:15.357 09:55:09 -- common/autotest_common.sh@10 -- # set +x
00:17:15.357 Remove shared memory files
09:55:09 -- ftl/bdevperf.sh@41 -- # remove_shm
09:55:09 -- ftl/common.sh@204 -- # echo Remove shared memory files
09:55:09 -- ftl/common.sh@205 -- # rm -f rm -f
09:55:09 -- ftl/common.sh@206 -- # rm -f rm -f
00:17:15.616 09:55:09 -- ftl/common.sh@207 -- # rm -f rm -f
00:17:15.616 09:55:09 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:17:15.616 09:55:09 -- ftl/common.sh@209 -- # rm -f rm -f
00:17:15.616
00:17:15.616 real 0m23.903s
00:17:15.616 user 0m27.131s
00:17:15.616 sys 0m1.066s
00:17:15.616 09:55:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:15.616 ************************************
00:17:15.616 END TEST ftl_bdevperf
00:17:15.616 ************************************
00:17:15.616 09:55:09 -- common/autotest_common.sh@10 -- # set +x
00:17:15.616 09:55:09 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0
00:17:15.616 09:55:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:17:15.616 09:55:09 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:17:15.616 09:55:09 -- common/autotest_common.sh@10 -- # set +x
00:17:15.616 ************************************
00:17:15.616 START TEST ftl_trim 00:17:15.616 ************************************ 00:17:15.616 09:55:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:17:15.616 * Looking for test storage... 00:17:15.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.616 09:55:09 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:15.616 09:55:09 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:15.616 09:55:09 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.616 09:55:09 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.616 09:55:09 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:15.616 09:55:09 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:15.616 09:55:09 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:15.616 09:55:09 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:15.616 09:55:09 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:15.616 09:55:09 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.616 09:55:09 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.616 09:55:09 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:15.616 09:55:09 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:15.616 09:55:09 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:15.616 09:55:09 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:15.616 09:55:09 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:15.616 09:55:09 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:15.616 09:55:09 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.616 09:55:09 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.616 09:55:09 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:15.616 09:55:09 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:15.616 09:55:09 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:15.616 09:55:09 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:15.616 09:55:09 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:15.616 09:55:09 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:15.616 09:55:09 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:15.616 09:55:09 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:15.616 09:55:09 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:15.616 09:55:09 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:15.616 09:55:09 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:15.616 09:55:09 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:17:15.616 09:55:09 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:17:15.616 09:55:09 -- ftl/trim.sh@25 -- # timeout=240 00:17:15.616 09:55:09 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:15.616 09:55:09 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:15.616 09:55:09 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:15.616 09:55:09 -- ftl/trim.sh@34 -- # 
export FTL_BDEV_NAME=ftl0 00:17:15.616 09:55:09 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:15.616 09:55:09 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:15.616 09:55:09 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:15.616 09:55:09 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:15.616 09:55:09 -- ftl/trim.sh@40 -- # svcpid=73312 00:17:15.616 09:55:09 -- ftl/trim.sh@41 -- # waitforlisten 73312 00:17:15.616 09:55:09 -- common/autotest_common.sh@819 -- # '[' -z 73312 ']' 00:17:15.616 09:55:09 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:15.616 09:55:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.616 09:55:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:15.616 09:55:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.616 09:55:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:15.616 09:55:09 -- common/autotest_common.sh@10 -- # set +x 00:17:15.616 [2024-06-10 09:55:09.371199] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:15.616 [2024-06-10 09:55:09.371327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73312 ] 00:17:15.875 [2024-06-10 09:55:09.532671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:16.133 [2024-06-10 09:55:09.707500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:16.133 [2024-06-10 09:55:09.707897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.133 [2024-06-10 09:55:09.708005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.133 [2024-06-10 09:55:09.708021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.520 09:55:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:17.520 09:55:11 -- common/autotest_common.sh@852 -- # return 0 00:17:17.520 09:55:11 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:17.520 09:55:11 -- ftl/common.sh@54 -- # local name=nvme0 00:17:17.520 09:55:11 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:17.520 09:55:11 -- ftl/common.sh@56 -- # local size=103424 00:17:17.520 09:55:11 -- ftl/common.sh@59 -- # local base_bdev 00:17:17.520 09:55:11 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:17.778 09:55:11 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:17.778 09:55:11 -- ftl/common.sh@62 -- # local base_size 00:17:17.778 09:55:11 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:17.778 09:55:11 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:17:17.778 09:55:11 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:17.778 09:55:11 -- common/autotest_common.sh@1359 -- # local bs 00:17:17.778 09:55:11 -- common/autotest_common.sh@1360 -- # local nb 00:17:17.779 09:55:11 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:18.037 
09:55:11 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:18.037 { 00:17:18.037 "name": "nvme0n1", 00:17:18.037 "aliases": [ 00:17:18.037 "71c5b4c5-748a-4b3e-b371-4d2d63c8ef99" 00:17:18.037 ], 00:17:18.037 "product_name": "NVMe disk", 00:17:18.037 "block_size": 4096, 00:17:18.037 "num_blocks": 1310720, 00:17:18.037 "uuid": "71c5b4c5-748a-4b3e-b371-4d2d63c8ef99", 00:17:18.037 "assigned_rate_limits": { 00:17:18.037 "rw_ios_per_sec": 0, 00:17:18.037 "rw_mbytes_per_sec": 0, 00:17:18.037 "r_mbytes_per_sec": 0, 00:17:18.037 "w_mbytes_per_sec": 0 00:17:18.037 }, 00:17:18.037 "claimed": true, 00:17:18.037 "claim_type": "read_many_write_one", 00:17:18.037 "zoned": false, 00:17:18.037 "supported_io_types": { 00:17:18.037 "read": true, 00:17:18.037 "write": true, 00:17:18.037 "unmap": true, 00:17:18.037 "write_zeroes": true, 00:17:18.037 "flush": true, 00:17:18.037 "reset": true, 00:17:18.037 "compare": true, 00:17:18.037 "compare_and_write": false, 00:17:18.037 "abort": true, 00:17:18.037 "nvme_admin": true, 00:17:18.037 "nvme_io": true 00:17:18.037 }, 00:17:18.037 "driver_specific": { 00:17:18.037 "nvme": [ 00:17:18.037 { 00:17:18.037 "pci_address": "0000:00:07.0", 00:17:18.037 "trid": { 00:17:18.037 "trtype": "PCIe", 00:17:18.037 "traddr": "0000:00:07.0" 00:17:18.037 }, 00:17:18.037 "ctrlr_data": { 00:17:18.037 "cntlid": 0, 00:17:18.037 "vendor_id": "0x1b36", 00:17:18.037 "model_number": "QEMU NVMe Ctrl", 00:17:18.037 "serial_number": "12341", 00:17:18.037 "firmware_revision": "8.0.0", 00:17:18.037 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:18.037 "oacs": { 00:17:18.037 "security": 0, 00:17:18.037 "format": 1, 00:17:18.037 "firmware": 0, 00:17:18.037 "ns_manage": 1 00:17:18.037 }, 00:17:18.037 "multi_ctrlr": false, 00:17:18.037 "ana_reporting": false 00:17:18.037 }, 00:17:18.037 "vs": { 00:17:18.037 "nvme_version": "1.4" 00:17:18.037 }, 00:17:18.037 "ns_data": { 00:17:18.037 "id": 1, 00:17:18.037 "can_share": false 00:17:18.037 } 00:17:18.037 } 00:17:18.037 ], 00:17:18.037 "mp_policy": "active_passive" 00:17:18.037 } 00:17:18.037 } 00:17:18.037 ]' 00:17:18.037 09:55:11 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:18.037 09:55:11 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:18.037 09:55:11 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:18.037 09:55:11 -- common/autotest_common.sh@1363 -- # nb=1310720 00:17:18.037 09:55:11 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:17:18.037 09:55:11 -- common/autotest_common.sh@1367 -- # echo 5120 00:17:18.037 09:55:11 -- ftl/common.sh@63 -- # base_size=5120 00:17:18.037 09:55:11 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:18.037 09:55:11 -- ftl/common.sh@67 -- # clear_lvols 00:17:18.037 09:55:11 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:18.037 09:55:11 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:18.295 09:55:11 -- ftl/common.sh@28 -- # stores=3a7e568a-254b-4fbc-af0c-94fc37056e53 00:17:18.295 09:55:11 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:18.295 09:55:11 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a7e568a-254b-4fbc-af0c-94fc37056e53 00:17:18.553 09:55:12 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:18.811 09:55:12 -- ftl/common.sh@68 -- # lvs=d85a82cd-931e-43be-a5d5-485e66c464d4 00:17:18.811 09:55:12 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
nvme0n1p0 103424 -t -u d85a82cd-931e-43be-a5d5-485e66c464d4 00:17:19.069 09:55:12 -- ftl/trim.sh@43 -- # split_bdev=b19eb64c-6595-421e-b532-33e79dda234d 00:17:19.069 09:55:12 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 b19eb64c-6595-421e-b532-33e79dda234d 00:17:19.069 09:55:12 -- ftl/common.sh@35 -- # local name=nvc0 00:17:19.069 09:55:12 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:19.069 09:55:12 -- ftl/common.sh@37 -- # local base_bdev=b19eb64c-6595-421e-b532-33e79dda234d 00:17:19.069 09:55:12 -- ftl/common.sh@38 -- # local cache_size= 00:17:19.069 09:55:12 -- ftl/common.sh@41 -- # get_bdev_size b19eb64c-6595-421e-b532-33e79dda234d 00:17:19.069 09:55:12 -- common/autotest_common.sh@1357 -- # local bdev_name=b19eb64c-6595-421e-b532-33e79dda234d 00:17:19.069 09:55:12 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:19.069 09:55:12 -- common/autotest_common.sh@1359 -- # local bs 00:17:19.069 09:55:12 -- common/autotest_common.sh@1360 -- # local nb 00:17:19.069 09:55:12 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b19eb64c-6595-421e-b532-33e79dda234d 00:17:19.327 09:55:12 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:19.327 { 00:17:19.327 "name": "b19eb64c-6595-421e-b532-33e79dda234d", 00:17:19.327 "aliases": [ 00:17:19.327 "lvs/nvme0n1p0" 00:17:19.327 ], 00:17:19.327 "product_name": "Logical Volume", 00:17:19.327 "block_size": 4096, 00:17:19.327 "num_blocks": 26476544, 00:17:19.327 "uuid": "b19eb64c-6595-421e-b532-33e79dda234d", 00:17:19.327 "assigned_rate_limits": { 00:17:19.327 "rw_ios_per_sec": 0, 00:17:19.327 "rw_mbytes_per_sec": 0, 00:17:19.327 "r_mbytes_per_sec": 0, 00:17:19.327 "w_mbytes_per_sec": 0 00:17:19.327 }, 00:17:19.327 "claimed": false, 00:17:19.327 "zoned": false, 00:17:19.327 "supported_io_types": { 00:17:19.327 "read": true, 00:17:19.327 "write": true, 00:17:19.327 "unmap": true, 00:17:19.327 "write_zeroes": true, 00:17:19.327 "flush": false, 00:17:19.327 "reset": true, 00:17:19.327 "compare": false, 00:17:19.327 "compare_and_write": false, 00:17:19.327 "abort": false, 00:17:19.327 "nvme_admin": false, 00:17:19.327 "nvme_io": false 00:17:19.327 }, 00:17:19.327 "driver_specific": { 00:17:19.327 "lvol": { 00:17:19.327 "lvol_store_uuid": "d85a82cd-931e-43be-a5d5-485e66c464d4", 00:17:19.327 "base_bdev": "nvme0n1", 00:17:19.327 "thin_provision": true, 00:17:19.327 "snapshot": false, 00:17:19.327 "clone": false, 00:17:19.327 "esnap_clone": false 00:17:19.327 } 00:17:19.327 } 00:17:19.327 } 00:17:19.327 ]' 00:17:19.327 09:55:12 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:19.327 09:55:12 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:19.327 09:55:12 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:19.327 09:55:13 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:19.327 09:55:13 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:19.327 09:55:13 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:19.327 09:55:13 -- ftl/common.sh@41 -- # local base_size=5171 00:17:19.327 09:55:13 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:19.327 09:55:13 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:19.585 09:55:13 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:19.585 09:55:13 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:19.585 09:55:13 -- ftl/common.sh@48 -- # get_bdev_size b19eb64c-6595-421e-b532-33e79dda234d 00:17:19.585 09:55:13 
-- common/autotest_common.sh@1357 -- # local bdev_name=b19eb64c-6595-421e-b532-33e79dda234d 00:17:19.585 09:55:13 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:19.585 09:55:13 -- common/autotest_common.sh@1359 -- # local bs 00:17:19.585 09:55:13 -- common/autotest_common.sh@1360 -- # local nb 00:17:19.585 09:55:13 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b19eb64c-6595-421e-b532-33e79dda234d 00:17:19.844 09:55:13 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:19.844 { 00:17:19.844 "name": "b19eb64c-6595-421e-b532-33e79dda234d", 00:17:19.844 "aliases": [ 00:17:19.844 "lvs/nvme0n1p0" 00:17:19.844 ], 00:17:19.844 "product_name": "Logical Volume", 00:17:19.844 "block_size": 4096, 00:17:19.844 "num_blocks": 26476544, 00:17:19.844 "uuid": "b19eb64c-6595-421e-b532-33e79dda234d", 00:17:19.844 "assigned_rate_limits": { 00:17:19.844 "rw_ios_per_sec": 0, 00:17:19.844 "rw_mbytes_per_sec": 0, 00:17:19.844 "r_mbytes_per_sec": 0, 00:17:19.844 "w_mbytes_per_sec": 0 00:17:19.844 }, 00:17:19.844 "claimed": false, 00:17:19.844 "zoned": false, 00:17:19.844 "supported_io_types": { 00:17:19.844 "read": true, 00:17:19.844 "write": true, 00:17:19.844 "unmap": true, 00:17:19.844 "write_zeroes": true, 00:17:19.844 "flush": false, 00:17:19.844 "reset": true, 00:17:19.844 "compare": false, 00:17:19.844 "compare_and_write": false, 00:17:19.844 "abort": false, 00:17:19.844 "nvme_admin": false, 00:17:19.844 "nvme_io": false 00:17:19.844 }, 00:17:19.844 "driver_specific": { 00:17:19.844 "lvol": { 00:17:19.844 "lvol_store_uuid": "d85a82cd-931e-43be-a5d5-485e66c464d4", 00:17:19.844 "base_bdev": "nvme0n1", 00:17:19.844 "thin_provision": true, 00:17:19.844 "snapshot": false, 00:17:19.844 "clone": false, 00:17:19.844 "esnap_clone": false 00:17:19.844 } 00:17:19.844 } 00:17:19.844 } 00:17:19.844 ]' 00:17:19.844 09:55:13 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:20.103 09:55:13 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:20.103 09:55:13 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:20.103 09:55:13 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:20.103 09:55:13 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:20.103 09:55:13 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:20.103 09:55:13 -- ftl/common.sh@48 -- # cache_size=5171 00:17:20.103 09:55:13 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:20.362 09:55:13 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:20.362 09:55:13 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:20.362 09:55:13 -- ftl/trim.sh@47 -- # get_bdev_size b19eb64c-6595-421e-b532-33e79dda234d 00:17:20.362 09:55:13 -- common/autotest_common.sh@1357 -- # local bdev_name=b19eb64c-6595-421e-b532-33e79dda234d 00:17:20.362 09:55:13 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:20.362 09:55:13 -- common/autotest_common.sh@1359 -- # local bs 00:17:20.362 09:55:13 -- common/autotest_common.sh@1360 -- # local nb 00:17:20.362 09:55:13 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b19eb64c-6595-421e-b532-33e79dda234d 00:17:20.620 09:55:14 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:20.620 { 00:17:20.620 "name": "b19eb64c-6595-421e-b532-33e79dda234d", 00:17:20.620 "aliases": [ 00:17:20.620 "lvs/nvme0n1p0" 00:17:20.620 ], 00:17:20.620 "product_name": "Logical Volume", 00:17:20.620 "block_size": 4096, 00:17:20.620 
"num_blocks": 26476544, 00:17:20.620 "uuid": "b19eb64c-6595-421e-b532-33e79dda234d", 00:17:20.620 "assigned_rate_limits": { 00:17:20.620 "rw_ios_per_sec": 0, 00:17:20.620 "rw_mbytes_per_sec": 0, 00:17:20.620 "r_mbytes_per_sec": 0, 00:17:20.620 "w_mbytes_per_sec": 0 00:17:20.620 }, 00:17:20.620 "claimed": false, 00:17:20.620 "zoned": false, 00:17:20.620 "supported_io_types": { 00:17:20.620 "read": true, 00:17:20.620 "write": true, 00:17:20.620 "unmap": true, 00:17:20.620 "write_zeroes": true, 00:17:20.620 "flush": false, 00:17:20.620 "reset": true, 00:17:20.620 "compare": false, 00:17:20.620 "compare_and_write": false, 00:17:20.620 "abort": false, 00:17:20.620 "nvme_admin": false, 00:17:20.620 "nvme_io": false 00:17:20.620 }, 00:17:20.620 "driver_specific": { 00:17:20.620 "lvol": { 00:17:20.620 "lvol_store_uuid": "d85a82cd-931e-43be-a5d5-485e66c464d4", 00:17:20.620 "base_bdev": "nvme0n1", 00:17:20.620 "thin_provision": true, 00:17:20.620 "snapshot": false, 00:17:20.620 "clone": false, 00:17:20.620 "esnap_clone": false 00:17:20.620 } 00:17:20.620 } 00:17:20.621 } 00:17:20.621 ]' 00:17:20.621 09:55:14 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:20.621 09:55:14 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:20.621 09:55:14 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:20.621 09:55:14 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:20.621 09:55:14 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:20.621 09:55:14 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:20.621 09:55:14 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:20.621 09:55:14 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b19eb64c-6595-421e-b532-33e79dda234d -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:20.881 [2024-06-10 09:55:14.480033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.881 [2024-06-10 09:55:14.480695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:20.881 [2024-06-10 09:55:14.480831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:20.881 [2024-06-10 09:55:14.480916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.881 [2024-06-10 09:55:14.484413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.881 [2024-06-10 09:55:14.484543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.881 [2024-06-10 09:55:14.484641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms 00:17:20.881 [2024-06-10 09:55:14.484721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.881 [2024-06-10 09:55:14.484964] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:20.881 [2024-06-10 09:55:14.486006] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:20.881 [2024-06-10 09:55:14.486151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.881 [2024-06-10 09:55:14.486233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.881 [2024-06-10 09:55:14.486310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:17:20.881 [2024-06-10 09:55:14.486388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.881 [2024-06-10 09:55:14.486702] mngt/ftl_mngt_md.c: 
567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 22a6fe3e-092b-45c9-bec4-df5d368748af 00:17:20.881 [2024-06-10 09:55:14.487816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.881 [2024-06-10 09:55:14.487861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:20.881 [2024-06-10 09:55:14.487879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:20.881 [2024-06-10 09:55:14.487893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.881 [2024-06-10 09:55:14.492422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.881 [2024-06-10 09:55:14.492472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.881 [2024-06-10 09:55:14.492488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.431 ms 00:17:20.881 [2024-06-10 09:55:14.492503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.881 [2024-06-10 09:55:14.492680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.881 [2024-06-10 09:55:14.492706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.881 [2024-06-10 09:55:14.492720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:20.881 [2024-06-10 09:55:14.492738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.881 [2024-06-10 09:55:14.492784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.881 [2024-06-10 09:55:14.492801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:20.881 [2024-06-10 09:55:14.492816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:20.881 [2024-06-10 09:55:14.492829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.881 [2024-06-10 09:55:14.492874] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:20.881 [2024-06-10 09:55:14.497351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.881 [2024-06-10 09:55:14.497385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.881 [2024-06-10 09:55:14.497404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.485 ms 00:17:20.881 [2024-06-10 09:55:14.497415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.881 [2024-06-10 09:55:14.497497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.881 [2024-06-10 09:55:14.497515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:20.881 [2024-06-10 09:55:14.497530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:20.881 [2024-06-10 09:55:14.497541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.881 [2024-06-10 09:55:14.497582] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:20.881 [2024-06-10 09:55:14.497733] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:20.881 [2024-06-10 09:55:14.497757] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:20.881 [2024-06-10 09:55:14.497772] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x140 bytes 00:17:20.881 [2024-06-10 09:55:14.497789] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:20.881 [2024-06-10 09:55:14.497802] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:20.881 [2024-06-10 09:55:14.497816] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:20.881 [2024-06-10 09:55:14.497827] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:20.881 [2024-06-10 09:55:14.497844] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:20.881 [2024-06-10 09:55:14.497855] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:20.881 [2024-06-10 09:55:14.497868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.881 [2024-06-10 09:55:14.497879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:20.881 [2024-06-10 09:55:14.497893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:17:20.881 [2024-06-10 09:55:14.497904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.881 [2024-06-10 09:55:14.498005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.881 [2024-06-10 09:55:14.498021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:20.881 [2024-06-10 09:55:14.498035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:20.881 [2024-06-10 09:55:14.498046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.881 [2024-06-10 09:55:14.498178] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:20.881 [2024-06-10 09:55:14.498196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:20.881 [2024-06-10 09:55:14.498210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:20.881 [2024-06-10 09:55:14.498222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.881 [2024-06-10 09:55:14.498236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:20.881 [2024-06-10 09:55:14.498246] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:20.881 [2024-06-10 09:55:14.498258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:20.881 [2024-06-10 09:55:14.498274] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:20.881 [2024-06-10 09:55:14.498287] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:20.881 [2024-06-10 09:55:14.498298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:20.881 [2024-06-10 09:55:14.498310] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:20.881 [2024-06-10 09:55:14.498320] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:20.881 [2024-06-10 09:55:14.498334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:20.881 [2024-06-10 09:55:14.498345] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:20.881 [2024-06-10 09:55:14.498357] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:20.881 [2024-06-10 09:55:14.498367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.881 [2024-06-10 09:55:14.498381] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:17:20.881 [2024-06-10 09:55:14.498391] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:20.881 [2024-06-10 09:55:14.498403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.881 [2024-06-10 09:55:14.498413] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:20.881 [2024-06-10 09:55:14.498424] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:20.881 [2024-06-10 09:55:14.498434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:20.881 [2024-06-10 09:55:14.498446] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:20.881 [2024-06-10 09:55:14.498456] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:20.881 [2024-06-10 09:55:14.498468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:20.881 [2024-06-10 09:55:14.498478] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:20.882 [2024-06-10 09:55:14.498489] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:20.882 [2024-06-10 09:55:14.498499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:20.882 [2024-06-10 09:55:14.498511] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:20.882 [2024-06-10 09:55:14.498521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:20.882 [2024-06-10 09:55:14.498533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:20.882 [2024-06-10 09:55:14.498543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:20.882 [2024-06-10 09:55:14.498557] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:20.882 [2024-06-10 09:55:14.498567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:20.882 [2024-06-10 09:55:14.498579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:20.882 [2024-06-10 09:55:14.498589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:20.882 [2024-06-10 09:55:14.498601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:20.882 [2024-06-10 09:55:14.498611] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:20.882 [2024-06-10 09:55:14.498624] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:20.882 [2024-06-10 09:55:14.498640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:20.882 [2024-06-10 09:55:14.498652] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:20.882 [2024-06-10 09:55:14.498663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:20.882 [2024-06-10 09:55:14.498676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:20.882 [2024-06-10 09:55:14.498687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.882 [2024-06-10 09:55:14.498700] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:20.882 [2024-06-10 09:55:14.498710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:20.882 [2024-06-10 09:55:14.498722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:20.882 [2024-06-10 09:55:14.498732] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:20.882 [2024-06-10 09:55:14.498746] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:20.882 [2024-06-10 09:55:14.498756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:20.882 [2024-06-10 09:55:14.498770] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:20.882 [2024-06-10 09:55:14.498783] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:20.882 [2024-06-10 09:55:14.498800] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:20.882 [2024-06-10 09:55:14.498811] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:20.882 [2024-06-10 09:55:14.498824] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:20.882 [2024-06-10 09:55:14.498835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:20.882 [2024-06-10 09:55:14.498848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:20.882 [2024-06-10 09:55:14.498859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:20.882 [2024-06-10 09:55:14.498871] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:20.882 [2024-06-10 09:55:14.498882] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:20.882 [2024-06-10 09:55:14.498895] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:20.882 [2024-06-10 09:55:14.498906] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:20.882 [2024-06-10 09:55:14.498920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:20.882 [2024-06-10 09:55:14.498932] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:20.882 [2024-06-10 09:55:14.498949] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:20.882 [2024-06-10 09:55:14.498960] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:20.882 [2024-06-10 09:55:14.498974] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:20.882 [2024-06-10 09:55:14.498986] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:20.882 [2024-06-10 09:55:14.498999] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:20.882 [2024-06-10 09:55:14.499010] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:20.882 [2024-06-10 09:55:14.499023] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:20.882 [2024-06-10 09:55:14.499038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.882 [2024-06-10 09:55:14.499053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:20.882 [2024-06-10 09:55:14.499064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms 00:17:20.882 [2024-06-10 09:55:14.499077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.882 [2024-06-10 09:55:14.516890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.882 [2024-06-10 09:55:14.516958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.882 [2024-06-10 09:55:14.516981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.708 ms 00:17:20.882 [2024-06-10 09:55:14.516994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.882 [2024-06-10 09:55:14.517178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.882 [2024-06-10 09:55:14.517205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:20.882 [2024-06-10 09:55:14.517219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:20.882 [2024-06-10 09:55:14.517232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.882 [2024-06-10 09:55:14.556677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.882 [2024-06-10 09:55:14.556738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.882 [2024-06-10 09:55:14.556757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.402 ms 00:17:20.882 [2024-06-10 09:55:14.556771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.882 [2024-06-10 09:55:14.556887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.882 [2024-06-10 09:55:14.556928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.882 [2024-06-10 09:55:14.556957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:20.882 [2024-06-10 09:55:14.556970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.882 [2024-06-10 09:55:14.557308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.882 [2024-06-10 09:55:14.557335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.882 [2024-06-10 09:55:14.557349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:17:20.882 [2024-06-10 09:55:14.557362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.882 [2024-06-10 09:55:14.557499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.882 [2024-06-10 09:55:14.557518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.882 [2024-06-10 09:55:14.557531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:20.882 [2024-06-10 09:55:14.557544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.882 [2024-06-10 09:55:14.582951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.882 [2024-06-10 
09:55:14.583009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.882 [2024-06-10 09:55:14.583030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.368 ms 00:17:20.882 [2024-06-10 09:55:14.583044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.882 [2024-06-10 09:55:14.596499] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:20.882 [2024-06-10 09:55:14.610237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.882 [2024-06-10 09:55:14.610307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:20.882 [2024-06-10 09:55:14.610331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.977 ms 00:17:20.882 [2024-06-10 09:55:14.610343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.141 [2024-06-10 09:55:14.679028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.141 [2024-06-10 09:55:14.679098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:21.141 [2024-06-10 09:55:14.679131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.554 ms 00:17:21.141 [2024-06-10 09:55:14.679145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.141 [2024-06-10 09:55:14.679258] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:21.141 [2024-06-10 09:55:14.679282] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:23.128 [2024-06-10 09:55:16.794708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.128 [2024-06-10 09:55:16.794774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:23.128 [2024-06-10 09:55:16.794800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2115.458 ms 00:17:23.128 [2024-06-10 09:55:16.794812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.128 [2024-06-10 09:55:16.795126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.128 [2024-06-10 09:55:16.795148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:23.128 [2024-06-10 09:55:16.795165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:17:23.128 [2024-06-10 09:55:16.795180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.128 [2024-06-10 09:55:16.826487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.128 [2024-06-10 09:55:16.826547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:23.128 [2024-06-10 09:55:16.826570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.257 ms 00:17:23.128 [2024-06-10 09:55:16.826583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.128 [2024-06-10 09:55:16.861064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.128 [2024-06-10 09:55:16.861165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:23.128 [2024-06-10 09:55:16.861194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.353 ms 00:17:23.128 [2024-06-10 09:55:16.861206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.128 [2024-06-10 09:55:16.861685] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.128 [2024-06-10 09:55:16.861725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:23.128 [2024-06-10 09:55:16.861745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:17:23.128 [2024-06-10 09:55:16.861759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.387 [2024-06-10 09:55:16.940896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.387 [2024-06-10 09:55:16.940995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:23.387 [2024-06-10 09:55:16.941022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.073 ms 00:17:23.387 [2024-06-10 09:55:16.941035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.387 [2024-06-10 09:55:16.974817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.387 [2024-06-10 09:55:16.974881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:23.387 [2024-06-10 09:55:16.974909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.570 ms 00:17:23.387 [2024-06-10 09:55:16.974922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.387 [2024-06-10 09:55:16.979009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.387 [2024-06-10 09:55:16.979065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:23.387 [2024-06-10 09:55:16.979087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.938 ms 00:17:23.387 [2024-06-10 09:55:16.979099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.387 [2024-06-10 09:55:17.012175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.387 [2024-06-10 09:55:17.012283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:23.387 [2024-06-10 09:55:17.012306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.986 ms 00:17:23.387 [2024-06-10 09:55:17.012318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.387 [2024-06-10 09:55:17.012485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.387 [2024-06-10 09:55:17.012505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:23.387 [2024-06-10 09:55:17.012520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:23.387 [2024-06-10 09:55:17.012531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.387 [2024-06-10 09:55:17.012642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.387 [2024-06-10 09:55:17.012661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:23.387 [2024-06-10 09:55:17.012675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:23.387 [2024-06-10 09:55:17.012686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.387 [2024-06-10 09:55:17.013744] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:23.387 [2024-06-10 09:55:17.018195] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2533.340 ms, result 0 00:17:23.387 [2024-06-10 09:55:17.019335] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] 
FTL IO channel destroy on app_thread 00:17:23.387 { 00:17:23.387 "name": "ftl0", 00:17:23.387 "uuid": "22a6fe3e-092b-45c9-bec4-df5d368748af" 00:17:23.387 } 00:17:23.387 09:55:17 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:23.387 09:55:17 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:17:23.387 09:55:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:23.387 09:55:17 -- common/autotest_common.sh@889 -- # local i 00:17:23.387 09:55:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:23.387 09:55:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:23.387 09:55:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:23.646 09:55:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:23.904 [ 00:17:23.904 { 00:17:23.904 "name": "ftl0", 00:17:23.904 "aliases": [ 00:17:23.904 "22a6fe3e-092b-45c9-bec4-df5d368748af" 00:17:23.904 ], 00:17:23.904 "product_name": "FTL disk", 00:17:23.904 "block_size": 4096, 00:17:23.904 "num_blocks": 23592960, 00:17:23.904 "uuid": "22a6fe3e-092b-45c9-bec4-df5d368748af", 00:17:23.904 "assigned_rate_limits": { 00:17:23.904 "rw_ios_per_sec": 0, 00:17:23.904 "rw_mbytes_per_sec": 0, 00:17:23.904 "r_mbytes_per_sec": 0, 00:17:23.904 "w_mbytes_per_sec": 0 00:17:23.904 }, 00:17:23.904 "claimed": false, 00:17:23.905 "zoned": false, 00:17:23.905 "supported_io_types": { 00:17:23.905 "read": true, 00:17:23.905 "write": true, 00:17:23.905 "unmap": true, 00:17:23.905 "write_zeroes": true, 00:17:23.905 "flush": true, 00:17:23.905 "reset": false, 00:17:23.905 "compare": false, 00:17:23.905 "compare_and_write": false, 00:17:23.905 "abort": false, 00:17:23.905 "nvme_admin": false, 00:17:23.905 "nvme_io": false 00:17:23.905 }, 00:17:23.905 "driver_specific": { 00:17:23.905 "ftl": { 00:17:23.905 "base_bdev": "b19eb64c-6595-421e-b532-33e79dda234d", 00:17:23.905 "cache": "nvc0n1p0" 00:17:23.905 } 00:17:23.905 } 00:17:23.905 } 00:17:23.905 ] 00:17:23.905 09:55:17 -- common/autotest_common.sh@895 -- # return 0 00:17:23.905 09:55:17 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:23.905 09:55:17 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:24.163 09:55:17 -- ftl/trim.sh@56 -- # echo ']}' 00:17:24.163 09:55:17 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:24.422 09:55:18 -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:24.422 { 00:17:24.422 "name": "ftl0", 00:17:24.422 "aliases": [ 00:17:24.422 "22a6fe3e-092b-45c9-bec4-df5d368748af" 00:17:24.422 ], 00:17:24.422 "product_name": "FTL disk", 00:17:24.422 "block_size": 4096, 00:17:24.422 "num_blocks": 23592960, 00:17:24.422 "uuid": "22a6fe3e-092b-45c9-bec4-df5d368748af", 00:17:24.422 "assigned_rate_limits": { 00:17:24.422 "rw_ios_per_sec": 0, 00:17:24.422 "rw_mbytes_per_sec": 0, 00:17:24.422 "r_mbytes_per_sec": 0, 00:17:24.422 "w_mbytes_per_sec": 0 00:17:24.422 }, 00:17:24.422 "claimed": false, 00:17:24.422 "zoned": false, 00:17:24.422 "supported_io_types": { 00:17:24.422 "read": true, 00:17:24.422 "write": true, 00:17:24.422 "unmap": true, 00:17:24.422 "write_zeroes": true, 00:17:24.422 "flush": true, 00:17:24.422 "reset": false, 00:17:24.422 "compare": false, 00:17:24.422 "compare_and_write": false, 00:17:24.422 "abort": false, 00:17:24.422 "nvme_admin": false, 00:17:24.422 "nvme_io": false 00:17:24.422 }, 00:17:24.422 "driver_specific": { 00:17:24.422 "ftl": { 
00:17:24.422 "base_bdev": "b19eb64c-6595-421e-b532-33e79dda234d", 00:17:24.422 "cache": "nvc0n1p0" 00:17:24.422 } 00:17:24.422 } 00:17:24.422 } 00:17:24.422 ]' 00:17:24.422 09:55:18 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:24.422 09:55:18 -- ftl/trim.sh@60 -- # nb=23592960 00:17:24.422 09:55:18 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:24.680 [2024-06-10 09:55:18.254967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.680 [2024-06-10 09:55:18.255035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:24.680 [2024-06-10 09:55:18.255058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:24.680 [2024-06-10 09:55:18.255072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.681 [2024-06-10 09:55:18.255138] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:24.681 [2024-06-10 09:55:18.258628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.681 [2024-06-10 09:55:18.258662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:24.681 [2024-06-10 09:55:18.258682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.458 ms 00:17:24.681 [2024-06-10 09:55:18.258693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.681 [2024-06-10 09:55:18.259332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.681 [2024-06-10 09:55:18.259369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:24.681 [2024-06-10 09:55:18.259391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:17:24.681 [2024-06-10 09:55:18.259402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.681 [2024-06-10 09:55:18.263327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.681 [2024-06-10 09:55:18.263369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:24.681 [2024-06-10 09:55:18.263391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.882 ms 00:17:24.681 [2024-06-10 09:55:18.263403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.681 [2024-06-10 09:55:18.271314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.681 [2024-06-10 09:55:18.271357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:24.681 [2024-06-10 09:55:18.271377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.837 ms 00:17:24.681 [2024-06-10 09:55:18.271390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.681 [2024-06-10 09:55:18.303378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.681 [2024-06-10 09:55:18.303427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:24.681 [2024-06-10 09:55:18.303449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.877 ms 00:17:24.681 [2024-06-10 09:55:18.303462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.681 [2024-06-10 09:55:18.323659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.681 [2024-06-10 09:55:18.323705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:24.681 [2024-06-10 09:55:18.323726] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.093 ms 00:17:24.681 [2024-06-10 09:55:18.323739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.681 [2024-06-10 09:55:18.324006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.681 [2024-06-10 09:55:18.324041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:24.681 [2024-06-10 09:55:18.324074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:17:24.681 [2024-06-10 09:55:18.324088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.681 [2024-06-10 09:55:18.356809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.681 [2024-06-10 09:55:18.356848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:24.681 [2024-06-10 09:55:18.356894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.683 ms 00:17:24.681 [2024-06-10 09:55:18.356920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.681 [2024-06-10 09:55:18.389202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.681 [2024-06-10 09:55:18.389241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:24.681 [2024-06-10 09:55:18.389262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.190 ms 00:17:24.681 [2024-06-10 09:55:18.389273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.681 [2024-06-10 09:55:18.421680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.681 [2024-06-10 09:55:18.421738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:24.681 [2024-06-10 09:55:18.421758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.297 ms 00:17:24.681 [2024-06-10 09:55:18.421770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.941 [2024-06-10 09:55:18.453630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.941 [2024-06-10 09:55:18.453666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:24.941 [2024-06-10 09:55:18.453688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.695 ms 00:17:24.941 [2024-06-10 09:55:18.453716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.941 [2024-06-10 09:55:18.453814] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:24.941 [2024-06-10 09:55:18.453840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.453861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.453873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.453887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.453899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.453913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.453925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 
[2024-06-10 09:55:18.453938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.453950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.453964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.453975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.453989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:24.941 [2024-06-10 09:55:18.454300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:17:24.942 [2024-06-10 09:55:18.454312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.454999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:24.942 [2024-06-10 09:55:18.455201] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:24.942 [2024-06-10 09:55:18.455214] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22a6fe3e-092b-45c9-bec4-df5d368748af 00:17:24.942 [2024-06-10 09:55:18.455226] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:24.942 [2024-06-10 09:55:18.455239] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:24.942 [2024-06-10 09:55:18.455250] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:24.942 [2024-06-10 09:55:18.455263] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:24.942 [2024-06-10 09:55:18.455274] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:24.942 [2024-06-10 09:55:18.455288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:17:24.942 [2024-06-10 09:55:18.455299] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:24.942 [2024-06-10 09:55:18.455313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:24.943 [2024-06-10 09:55:18.455324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:24.943 [2024-06-10 09:55:18.455338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.943 [2024-06-10 09:55:18.455365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:24.943 [2024-06-10 09:55:18.455382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.528 ms 00:17:24.943 [2024-06-10 09:55:18.455396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.472076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.943 [2024-06-10 09:55:18.472155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:24.943 [2024-06-10 09:55:18.472176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.633 ms 00:17:24.943 [2024-06-10 09:55:18.472189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.472483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.943 [2024-06-10 09:55:18.472508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:24.943 [2024-06-10 09:55:18.472524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:17:24.943 [2024-06-10 09:55:18.472536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.530809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.943 [2024-06-10 09:55:18.530876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.943 [2024-06-10 09:55:18.530900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.943 [2024-06-10 09:55:18.530912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.531072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.943 [2024-06-10 09:55:18.531094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.943 [2024-06-10 09:55:18.531120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.943 [2024-06-10 09:55:18.531134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.531225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.943 [2024-06-10 09:55:18.531244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.943 [2024-06-10 09:55:18.531259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.943 [2024-06-10 09:55:18.531271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.531312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.943 [2024-06-10 09:55:18.531326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.943 [2024-06-10 09:55:18.531339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.943 [2024-06-10 09:55:18.531365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 
09:55:18.650990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.943 [2024-06-10 09:55:18.651044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.943 [2024-06-10 09:55:18.651069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.943 [2024-06-10 09:55:18.651081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.690326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.943 [2024-06-10 09:55:18.690382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.943 [2024-06-10 09:55:18.690406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.943 [2024-06-10 09:55:18.690418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.690533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.943 [2024-06-10 09:55:18.690552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:24.943 [2024-06-10 09:55:18.690566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.943 [2024-06-10 09:55:18.690577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.690639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.943 [2024-06-10 09:55:18.690652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:24.943 [2024-06-10 09:55:18.690666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.943 [2024-06-10 09:55:18.690677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.690877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.943 [2024-06-10 09:55:18.690902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:24.943 [2024-06-10 09:55:18.690921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.943 [2024-06-10 09:55:18.690933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.691034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.943 [2024-06-10 09:55:18.691052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:24.943 [2024-06-10 09:55:18.691066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.943 [2024-06-10 09:55:18.691078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.691159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.943 [2024-06-10 09:55:18.691181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:24.943 [2024-06-10 09:55:18.691197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.943 [2024-06-10 09:55:18.691208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.691282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.943 [2024-06-10 09:55:18.691298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:24.943 [2024-06-10 09:55:18.691312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.943 [2024-06-10 09:55:18.691323] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.943 [2024-06-10 09:55:18.691557] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 436.570 ms, result 0 00:17:24.943 true 00:17:25.202 09:55:18 -- ftl/trim.sh@63 -- # killprocess 73312 00:17:25.202 09:55:18 -- common/autotest_common.sh@926 -- # '[' -z 73312 ']' 00:17:25.202 09:55:18 -- common/autotest_common.sh@930 -- # kill -0 73312 00:17:25.202 09:55:18 -- common/autotest_common.sh@931 -- # uname 00:17:25.202 09:55:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:25.202 09:55:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73312 00:17:25.202 09:55:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:25.202 09:55:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:25.202 killing process with pid 73312 00:17:25.202 09:55:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73312' 00:17:25.202 09:55:18 -- common/autotest_common.sh@945 -- # kill 73312 00:17:25.202 09:55:18 -- common/autotest_common.sh@950 -- # wait 73312 00:17:29.395 09:55:23 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:30.772 65536+0 records in 00:17:30.772 65536+0 records out 00:17:30.772 268435456 bytes (268 MB, 256 MiB) copied, 1.13054 s, 237 MB/s 00:17:30.772 09:55:24 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:30.772 [2024-06-10 09:55:24.364251] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:30.772 [2024-06-10 09:55:24.364405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73546 ] 00:17:30.772 [2024-06-10 09:55:24.526549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.031 [2024-06-10 09:55:24.728793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.290 [2024-06-10 09:55:25.014021] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:31.290 [2024-06-10 09:55:25.014128] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:31.561 [2024-06-10 09:55:25.170596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.561 [2024-06-10 09:55:25.170683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:31.561 [2024-06-10 09:55:25.170719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:31.561 [2024-06-10 09:55:25.170735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.561 [2024-06-10 09:55:25.173888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.561 [2024-06-10 09:55:25.173945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.561 [2024-06-10 09:55:25.173977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.125 ms 00:17:31.561 [2024-06-10 09:55:25.173992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.561 [2024-06-10 09:55:25.174124] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:31.561 [2024-06-10 09:55:25.175069] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:31.561 [2024-06-10 09:55:25.175131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.561 [2024-06-10 09:55:25.175151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.561 [2024-06-10 09:55:25.175164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:17:31.561 [2024-06-10 09:55:25.175175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.561 [2024-06-10 09:55:25.176339] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:31.561 [2024-06-10 09:55:25.192070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.561 [2024-06-10 09:55:25.192131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:31.561 [2024-06-10 09:55:25.192166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.732 ms 00:17:31.561 [2024-06-10 09:55:25.192193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.561 [2024-06-10 09:55:25.192310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.561 [2024-06-10 09:55:25.192331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:31.561 [2024-06-10 09:55:25.192346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:31.561 [2024-06-10 09:55:25.192356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.561 [2024-06-10 09:55:25.196842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.561 [2024-06-10 09:55:25.196897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.561 [2024-06-10 09:55:25.196928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.416 ms 00:17:31.561 [2024-06-10 09:55:25.196939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.562 [2024-06-10 09:55:25.197124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.562 [2024-06-10 09:55:25.197148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.562 [2024-06-10 09:55:25.197161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:17:31.562 [2024-06-10 09:55:25.197194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.562 [2024-06-10 09:55:25.197236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.562 [2024-06-10 09:55:25.197251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:31.562 [2024-06-10 09:55:25.197263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:31.562 [2024-06-10 09:55:25.197286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.562 [2024-06-10 09:55:25.197324] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:31.562 [2024-06-10 09:55:25.201539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.562 [2024-06-10 09:55:25.201589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.562 [2024-06-10 09:55:25.201619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.230 ms 00:17:31.562 [2024-06-10 09:55:25.201629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.562 
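
For reference, the write phase running here reduces to two commands from trim.sh, both visible in the trace above: dd generates a 256 MiB random pattern (65536 * 4096 B = 268,435,456 B; copied in 1.13054 s, i.e. ~237 MB/s, matching the dd summary), and spdk_dd replays that pattern into ftl0. spdk_dd is a standalone SPDK application, so loading ftl.json brings the FTL device up again — which is why a second 'FTL startup' management sequence is being logged at this point. A minimal sketch, assuming dd's output target (not visible in the trace) is the random_pattern file that spdk_dd reads back:

    # hypothetical reconstruction of trim.sh@66/@69; dd's of= target is an
    # assumption -- the trace above only shows its if=, bs= and count=
    PATTERN=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
    dd if=/dev/urandom of="$PATTERN" bs=4K count=65536   # 65536 * 4 KiB = 256 MiB
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if="$PATTERN" --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
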
[2024-06-10 09:55:25.201711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.562 [2024-06-10 09:55:25.201732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:31.562 [2024-06-10 09:55:25.201744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:31.562 [2024-06-10 09:55:25.201754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.562 [2024-06-10 09:55:25.201785] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:31.562 [2024-06-10 09:55:25.201811] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:31.562 [2024-06-10 09:55:25.201866] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:31.562 [2024-06-10 09:55:25.201884] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:31.562 [2024-06-10 09:55:25.201966] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:31.562 [2024-06-10 09:55:25.201981] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:31.562 [2024-06-10 09:55:25.201995] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:31.562 [2024-06-10 09:55:25.202009] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:31.562 [2024-06-10 09:55:25.202021] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:31.562 [2024-06-10 09:55:25.202032] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:31.562 [2024-06-10 09:55:25.202042] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:31.562 [2024-06-10 09:55:25.202052] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:31.562 [2024-06-10 09:55:25.202077] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:31.562 [2024-06-10 09:55:25.202087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.562 [2024-06-10 09:55:25.202101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:31.562 [2024-06-10 09:55:25.202111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:17:31.562 [2024-06-10 09:55:25.202121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.562 [2024-06-10 09:55:25.202210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.562 [2024-06-10 09:55:25.202227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:31.562 [2024-06-10 09:55:25.202239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:31.562 [2024-06-10 09:55:25.202248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.562 [2024-06-10 09:55:25.202351] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:31.562 [2024-06-10 09:55:25.202378] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:31.562 [2024-06-10 09:55:25.202391] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.562 [2024-06-10 09:55:25.202407] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.562 [2024-06-10 09:55:25.202417] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:31.562 [2024-06-10 09:55:25.202428] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:31.562 [2024-06-10 09:55:25.202438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:31.562 [2024-06-10 09:55:25.202448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:31.562 [2024-06-10 09:55:25.202458] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:31.562 [2024-06-10 09:55:25.202467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.562 [2024-06-10 09:55:25.202476] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:31.562 [2024-06-10 09:55:25.202485] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:31.562 [2024-06-10 09:55:25.202495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.562 [2024-06-10 09:55:25.202504] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:31.562 [2024-06-10 09:55:25.202514] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:31.562 [2024-06-10 09:55:25.202522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.562 [2024-06-10 09:55:25.202531] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:31.562 [2024-06-10 09:55:25.202541] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:31.562 [2024-06-10 09:55:25.202565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.562 [2024-06-10 09:55:25.202601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:31.562 [2024-06-10 09:55:25.202611] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:31.562 [2024-06-10 09:55:25.202621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:31.562 [2024-06-10 09:55:25.202630] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:31.562 [2024-06-10 09:55:25.202640] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:31.562 [2024-06-10 09:55:25.202649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:31.562 [2024-06-10 09:55:25.202658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:31.562 [2024-06-10 09:55:25.202667] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:31.562 [2024-06-10 09:55:25.202677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:31.562 [2024-06-10 09:55:25.202686] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:31.562 [2024-06-10 09:55:25.202699] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:31.562 [2024-06-10 09:55:25.202724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:31.562 [2024-06-10 09:55:25.202733] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:31.562 [2024-06-10 09:55:25.202743] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:31.562 [2024-06-10 09:55:25.202752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:31.562 [2024-06-10 09:55:25.202762] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:31.562 [2024-06-10 09:55:25.202771] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:31.562 [2024-06-10 09:55:25.202797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.562 [2024-06-10 09:55:25.202810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:31.562 [2024-06-10 09:55:25.202821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:31.562 [2024-06-10 09:55:25.202830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.562 [2024-06-10 09:55:25.202840] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:31.562 [2024-06-10 09:55:25.202851] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:31.562 [2024-06-10 09:55:25.202861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.562 [2024-06-10 09:55:25.202872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.562 [2024-06-10 09:55:25.202883] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:31.562 [2024-06-10 09:55:25.202893] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:31.562 [2024-06-10 09:55:25.202903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:31.562 [2024-06-10 09:55:25.202913] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:31.562 [2024-06-10 09:55:25.202923] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:31.562 [2024-06-10 09:55:25.202933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:31.562 [2024-06-10 09:55:25.202944] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:31.562 [2024-06-10 09:55:25.202962] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.562 [2024-06-10 09:55:25.202974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:31.562 [2024-06-10 09:55:25.202985] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:31.562 [2024-06-10 09:55:25.202996] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:31.562 [2024-06-10 09:55:25.203007] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:31.562 [2024-06-10 09:55:25.203019] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:31.562 [2024-06-10 09:55:25.203029] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:31.562 [2024-06-10 09:55:25.203040] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:31.562 [2024-06-10 09:55:25.203051] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:31.562 [2024-06-10 09:55:25.203062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:31.562 
[2024-06-10 09:55:25.203088] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:31.562 [2024-06-10 09:55:25.203098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:31.563 [2024-06-10 09:55:25.203109] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:31.563 [2024-06-10 09:55:25.203120] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:31.563 [2024-06-10 09:55:25.203130] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:31.563 [2024-06-10 09:55:25.203157] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.563 [2024-06-10 09:55:25.203168] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:31.563 [2024-06-10 09:55:25.203178] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:31.563 [2024-06-10 09:55:25.203205] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:31.563 [2024-06-10 09:55:25.203231] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:31.563 [2024-06-10 09:55:25.203244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.563 [2024-06-10 09:55:25.203261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:31.563 [2024-06-10 09:55:25.203272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:17:31.563 [2024-06-10 09:55:25.203282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.563 [2024-06-10 09:55:25.220582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.563 [2024-06-10 09:55:25.220649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.563 [2024-06-10 09:55:25.220665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.237 ms 00:17:31.563 [2024-06-10 09:55:25.220676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.563 [2024-06-10 09:55:25.220832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.563 [2024-06-10 09:55:25.220849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:31.563 [2024-06-10 09:55:25.220860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:31.563 [2024-06-10 09:55:25.220886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.563 [2024-06-10 09:55:25.274434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.563 [2024-06-10 09:55:25.274491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.563 [2024-06-10 09:55:25.274510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.500 ms 00:17:31.563 [2024-06-10 09:55:25.274522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.563 
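
The layout dump above is internally consistent with the bdev reported earlier by bdev_get_bdevs: 23,592,960 L2P entries at the dumped L2P address size of 4 (bytes per entry) need exactly the 90.00 MiB shown for the l2p region; num_blocks 23592960 at block_size 4096 works out to an exposed capacity of exactly 90 GiB; and each band spans 261120 blocks of 4 KiB (the 0 / 261120 figures in the band dump earlier), i.e. 1020 MiB per band. A quick cross-check with plain shell arithmetic:

    echo $(( 23592960 * 4 / 1024 / 1024 ))            # l2p region size: 90 (MiB)
    echo $(( 23592960 * 4096 / 1024 / 1024 / 1024 ))  # exposed capacity: 90 (GiB)
    echo $(( 261120 * 4096 / 1024 / 1024 ))           # one band: 1020 (MiB)
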
[2024-06-10 09:55:25.274684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.563 [2024-06-10 09:55:25.274701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.563 [2024-06-10 09:55:25.274714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:31.563 [2024-06-10 09:55:25.274724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.563 [2024-06-10 09:55:25.275069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.563 [2024-06-10 09:55:25.275099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.563 [2024-06-10 09:55:25.275112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:17:31.563 [2024-06-10 09:55:25.275125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.563 [2024-06-10 09:55:25.275310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.563 [2024-06-10 09:55:25.275337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.563 [2024-06-10 09:55:25.275366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:17:31.563 [2024-06-10 09:55:25.275382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.563 [2024-06-10 09:55:25.293128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.563 [2024-06-10 09:55:25.293183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.563 [2024-06-10 09:55:25.293204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.696 ms 00:17:31.563 [2024-06-10 09:55:25.293216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.563 [2024-06-10 09:55:25.310487] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:31.563 [2024-06-10 09:55:25.310528] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:31.563 [2024-06-10 09:55:25.310565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.563 [2024-06-10 09:55:25.310577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:31.563 [2024-06-10 09:55:25.310590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.160 ms 00:17:31.563 [2024-06-10 09:55:25.310600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.341200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.822 [2024-06-10 09:55:25.341291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:31.822 [2024-06-10 09:55:25.341311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.499 ms 00:17:31.822 [2024-06-10 09:55:25.341324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.358901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.822 [2024-06-10 09:55:25.358966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:31.822 [2024-06-10 09:55:25.359014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.405 ms 00:17:31.822 [2024-06-10 09:55:25.359025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.374520] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:31.822 [2024-06-10 09:55:25.374560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:31.822 [2024-06-10 09:55:25.374591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.373 ms 00:17:31.822 [2024-06-10 09:55:25.374602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.375118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.822 [2024-06-10 09:55:25.375165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:31.822 [2024-06-10 09:55:25.375180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:17:31.822 [2024-06-10 09:55:25.375191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.448793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.822 [2024-06-10 09:55:25.448864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:31.822 [2024-06-10 09:55:25.448898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.567 ms 00:17:31.822 [2024-06-10 09:55:25.448910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.460612] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:31.822 [2024-06-10 09:55:25.473628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.822 [2024-06-10 09:55:25.473722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:31.822 [2024-06-10 09:55:25.473758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.579 ms 00:17:31.822 [2024-06-10 09:55:25.473769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.473906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.822 [2024-06-10 09:55:25.473925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:31.822 [2024-06-10 09:55:25.473937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:31.822 [2024-06-10 09:55:25.473948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.474048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.822 [2024-06-10 09:55:25.474065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:31.822 [2024-06-10 09:55:25.474082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:31.822 [2024-06-10 09:55:25.474092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.476073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.822 [2024-06-10 09:55:25.476138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:31.822 [2024-06-10 09:55:25.476185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.936 ms 00:17:31.822 [2024-06-10 09:55:25.476213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.476252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.822 [2024-06-10 09:55:25.476266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:31.822 [2024-06-10 09:55:25.476278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.006 ms 00:17:31.822 [2024-06-10 09:55:25.476295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.476336] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:31.822 [2024-06-10 09:55:25.476350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.822 [2024-06-10 09:55:25.476361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:31.822 [2024-06-10 09:55:25.476372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:31.822 [2024-06-10 09:55:25.476383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.505089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.822 [2024-06-10 09:55:25.505152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:31.822 [2024-06-10 09:55:25.505190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.675 ms 00:17:31.822 [2024-06-10 09:55:25.505201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.505317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.822 [2024-06-10 09:55:25.505369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:31.822 [2024-06-10 09:55:25.505382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:31.822 [2024-06-10 09:55:25.505393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.822 [2024-06-10 09:55:25.506503] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:31.822 [2024-06-10 09:55:25.510462] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 335.547 ms, result 0 00:17:31.822 [2024-06-10 09:55:25.511320] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:31.822 [2024-06-10 09:55:25.527028] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:43.059  Copying: 24/256 [MB] (24 MBps) Copying: 48/256 [MB] (23 MBps) Copying: 71/256 [MB] (23 MBps) Copying: 94/256 [MB] (22 MBps) Copying: 117/256 [MB] (23 MBps) Copying: 140/256 [MB] (22 MBps) Copying: 162/256 [MB] (22 MBps) Copying: 186/256 [MB] (23 MBps) Copying: 208/256 [MB] (21 MBps) Copying: 232/256 [MB] (24 MBps) Copying: 255/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-06-10 09:55:36.570488] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:43.059 [2024-06-10 09:55:36.582137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.059 [2024-06-10 09:55:36.582207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:43.059 [2024-06-10 09:55:36.582243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:43.059 [2024-06-10 09:55:36.582260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.059 [2024-06-10 09:55:36.582289] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:43.059 [2024-06-10 09:55:36.585451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.059 [2024-06-10 09:55:36.585497] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:43.059 [2024-06-10 09:55:36.585526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.144 ms 00:17:43.059 [2024-06-10 09:55:36.585537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.059 [2024-06-10 09:55:36.587421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.059 [2024-06-10 09:55:36.587462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:43.059 [2024-06-10 09:55:36.587478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.856 ms 00:17:43.059 [2024-06-10 09:55:36.587489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.059 [2024-06-10 09:55:36.594421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.059 [2024-06-10 09:55:36.594492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:43.059 [2024-06-10 09:55:36.594523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.905 ms 00:17:43.059 [2024-06-10 09:55:36.594534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.059 [2024-06-10 09:55:36.601318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.059 [2024-06-10 09:55:36.601364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:43.059 [2024-06-10 09:55:36.601392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.702 ms 00:17:43.059 [2024-06-10 09:55:36.601403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.059 [2024-06-10 09:55:36.632086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.059 [2024-06-10 09:55:36.632187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:43.059 [2024-06-10 09:55:36.632206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.615 ms 00:17:43.059 [2024-06-10 09:55:36.632219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.059 [2024-06-10 09:55:36.650239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.059 [2024-06-10 09:55:36.650317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:43.059 [2024-06-10 09:55:36.650362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.919 ms 00:17:43.059 [2024-06-10 09:55:36.650373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.059 [2024-06-10 09:55:36.650654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.059 [2024-06-10 09:55:36.650675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:43.059 [2024-06-10 09:55:36.650687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:17:43.059 [2024-06-10 09:55:36.650698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.059 [2024-06-10 09:55:36.679079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.059 [2024-06-10 09:55:36.679160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:43.059 [2024-06-10 09:55:36.679192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.358 ms 00:17:43.059 [2024-06-10 09:55:36.679216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.059 [2024-06-10 09:55:36.706969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:43.059 [2024-06-10 09:55:36.707021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:43.059 [2024-06-10 09:55:36.707051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.677 ms 00:17:43.059 [2024-06-10 09:55:36.707061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.059 [2024-06-10 09:55:36.733770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.059 [2024-06-10 09:55:36.733844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:43.059 [2024-06-10 09:55:36.733876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.626 ms 00:17:43.059 [2024-06-10 09:55:36.733887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.060 [2024-06-10 09:55:36.762203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.060 [2024-06-10 09:55:36.762293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:43.060 [2024-06-10 09:55:36.762326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.172 ms 00:17:43.060 [2024-06-10 09:55:36.762336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.060 [2024-06-10 09:55:36.762414] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:43.060 [2024-06-10 09:55:36.762440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762630] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 
[2024-06-10 09:55:36.762902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.762999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 
state: free 00:17:43.060 [2024-06-10 09:55:36.763189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:43.060 [2024-06-10 09:55:36.763425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 
0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:43.061 [2024-06-10 09:55:36.763620] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:43.061 [2024-06-10 09:55:36.763631] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22a6fe3e-092b-45c9-bec4-df5d368748af 00:17:43.061 [2024-06-10 09:55:36.763656] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:43.061 [2024-06-10 09:55:36.763682] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:43.061 [2024-06-10 09:55:36.763693] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:43.061 [2024-06-10 09:55:36.763704] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:43.061 [2024-06-10 09:55:36.763713] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:43.061 [2024-06-10 09:55:36.763724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:43.061 [2024-06-10 09:55:36.763749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:43.061 [2024-06-10 09:55:36.763758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:43.061 [2024-06-10 09:55:36.763767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:43.061 [2024-06-10 09:55:36.763777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.061 [2024-06-10 09:55:36.763793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:43.061 [2024-06-10 09:55:36.763804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.365 ms 00:17:43.061 [2024-06-10 09:55:36.763814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.061 [2024-06-10 09:55:36.780361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.061 [2024-06-10 09:55:36.780397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:43.061 [2024-06-10 09:55:36.780412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.521 ms 00:17:43.061 [2024-06-10 
09:55:36.780423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.061 [2024-06-10 09:55:36.780736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.061 [2024-06-10 09:55:36.780765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:43.061 [2024-06-10 09:55:36.780779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:17:43.061 [2024-06-10 09:55:36.780790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.319 [2024-06-10 09:55:36.830418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.319 [2024-06-10 09:55:36.830496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:43.319 [2024-06-10 09:55:36.830531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.319 [2024-06-10 09:55:36.830543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.319 [2024-06-10 09:55:36.830700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.319 [2024-06-10 09:55:36.830718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:43.319 [2024-06-10 09:55:36.830730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.319 [2024-06-10 09:55:36.830741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.319 [2024-06-10 09:55:36.830806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.319 [2024-06-10 09:55:36.830822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:43.319 [2024-06-10 09:55:36.830834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.319 [2024-06-10 09:55:36.830846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.319 [2024-06-10 09:55:36.830870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.319 [2024-06-10 09:55:36.830889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:43.319 [2024-06-10 09:55:36.830900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.319 [2024-06-10 09:55:36.830911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.319 [2024-06-10 09:55:36.928591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.319 [2024-06-10 09:55:36.928671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:43.319 [2024-06-10 09:55:36.928721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.319 [2024-06-10 09:55:36.928733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.319 [2024-06-10 09:55:36.965482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.319 [2024-06-10 09:55:36.965559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:43.319 [2024-06-10 09:55:36.965591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.319 [2024-06-10 09:55:36.965603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.319 [2024-06-10 09:55:36.965705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.319 [2024-06-10 09:55:36.965722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:43.319 [2024-06-10 09:55:36.965733] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:43.320 [2024-06-10 09:55:36.965744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:43.320 [2024-06-10 09:55:36.965778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:43.320 [2024-06-10 09:55:36.965790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:43.320 [2024-06-10 09:55:36.965810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:43.320 [2024-06-10 09:55:36.965851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:43.320 [2024-06-10 09:55:36.965982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:43.320 [2024-06-10 09:55:36.965999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:43.320 [2024-06-10 09:55:36.966010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:43.320 [2024-06-10 09:55:36.966021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:43.320 [2024-06-10 09:55:36.966067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:43.320 [2024-06-10 09:55:36.966082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:43.320 [2024-06-10 09:55:36.966093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:43.320 [2024-06-10 09:55:36.966109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:43.320 [2024-06-10 09:55:36.966154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:43.320 [2024-06-10 09:55:36.966187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:43.320 [2024-06-10 09:55:36.966202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:43.320 [2024-06-10 09:55:36.966212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:43.320 [2024-06-10 09:55:36.966267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:43.320 [2024-06-10 09:55:36.966284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:43.320 [2024-06-10 09:55:36.966300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:43.320 [2024-06-10 09:55:36.966315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:43.320 [2024-06-10 09:55:36.966476] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.341 ms, result 0
00:17:44.723
00:17:44.723
00:17:44.723 09:55:38 -- ftl/trim.sh@72 -- # svcpid=73717
00:17:44.723 09:55:38 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:17:44.723 09:55:38 -- ftl/trim.sh@73 -- # waitforlisten 73717
00:17:44.723 09:55:38 -- common/autotest_common.sh@819 -- # '[' -z 73717 ']'
00:17:44.723 09:55:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:44.723 09:55:38 -- common/autotest_common.sh@824 -- # local max_retries=100
00:17:44.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 09:55:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
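The xtrace lines above and the RPC calls traced below are the standard SPDK target lifecycle that this trim test drives: start spdk_tgt with FTL init logging, wait for its RPC socket, replay the saved bdev configuration (logged above as the 'FTL startup' management process), issue bdev_ftl_unmap requests, and finally kill the target (which triggers the 'FTL shutdown' sequence). A minimal bash sketch of that pattern follows; the commands and flags are taken from the trace itself, while the backgrounding, the $! assignment, and the rpc variable are reconstructions rather than a verbatim excerpt of ftl/trim.sh, and waitforlisten/killprocess are assumed to be sourced from common/autotest_common.sh as the trace indicates:

  # Sketch reconstructed from the xtrace output of this run; not copied from ftl/trim.sh.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &    # start the target, logging FTL init steps
  svcpid=$!                                                        # 73717 in this run
  waitforlisten "$svcpid"                   # poll until /var/tmp/spdk.sock accepts RPCs
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" load_config                        # re-create the ftl0 bdev; logged as 'FTL startup'
  "$rpc" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024          # trim at the start of the device
  "$rpc" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # and the last 1024 of the 23592960 L2P entries
  killprocess "$svcpid"                     # kill + wait; drives the 'FTL shutdown' management process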
00:17:44.723 09:55:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:44.723 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:17:44.723 [2024-06-10 09:55:38.309755] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:44.723 [2024-06-10 09:55:38.309943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73717 ] 00:17:44.723 [2024-06-10 09:55:38.470469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.981 [2024-06-10 09:55:38.638275] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:44.981 [2024-06-10 09:55:38.638535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.355 09:55:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:46.355 09:55:39 -- common/autotest_common.sh@852 -- # return 0 00:17:46.355 09:55:39 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:46.615 [2024-06-10 09:55:40.149777] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.615 [2024-06-10 09:55:40.149886] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.615 [2024-06-10 09:55:40.317809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.615 [2024-06-10 09:55:40.317891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:46.615 [2024-06-10 09:55:40.317931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:46.615 [2024-06-10 09:55:40.317943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.615 [2024-06-10 09:55:40.321219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.615 [2024-06-10 09:55:40.321275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:46.615 [2024-06-10 09:55:40.321310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.249 ms 00:17:46.615 [2024-06-10 09:55:40.321322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.615 [2024-06-10 09:55:40.321506] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:46.615 [2024-06-10 09:55:40.322552] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:46.615 [2024-06-10 09:55:40.322604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.615 [2024-06-10 09:55:40.322618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:46.615 [2024-06-10 09:55:40.322632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:17:46.615 [2024-06-10 09:55:40.322644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.615 [2024-06-10 09:55:40.324071] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:46.615 [2024-06-10 09:55:40.339047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.615 [2024-06-10 09:55:40.339294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:46.615 [2024-06-10 09:55:40.339321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.977 ms 00:17:46.615 [2024-06-10 09:55:40.339336] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.615 [2024-06-10 09:55:40.339669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.615 [2024-06-10 09:55:40.339705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:46.615 [2024-06-10 09:55:40.339721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:46.615 [2024-06-10 09:55:40.339738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.615 [2024-06-10 09:55:40.345002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.615 [2024-06-10 09:55:40.345080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:46.615 [2024-06-10 09:55:40.345098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.190 ms 00:17:46.615 [2024-06-10 09:55:40.345162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.615 [2024-06-10 09:55:40.345306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.615 [2024-06-10 09:55:40.345347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:46.615 [2024-06-10 09:55:40.345361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:46.615 [2024-06-10 09:55:40.345374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.615 [2024-06-10 09:55:40.345412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.615 [2024-06-10 09:55:40.345435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:46.615 [2024-06-10 09:55:40.345447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:46.615 [2024-06-10 09:55:40.345460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.615 [2024-06-10 09:55:40.345498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:46.615 [2024-06-10 09:55:40.349678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.615 [2024-06-10 09:55:40.349727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:46.615 [2024-06-10 09:55:40.349761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.191 ms 00:17:46.615 [2024-06-10 09:55:40.349772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.615 [2024-06-10 09:55:40.349859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.615 [2024-06-10 09:55:40.349891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:46.615 [2024-06-10 09:55:40.349906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:46.615 [2024-06-10 09:55:40.349917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.615 [2024-06-10 09:55:40.349947] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:46.615 [2024-06-10 09:55:40.349976] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:46.615 [2024-06-10 09:55:40.350017] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:46.616 [2024-06-10 09:55:40.350038] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:46.616 [2024-06-10 09:55:40.350122] 
upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:46.616 [2024-06-10 09:55:40.350170] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:46.616 [2024-06-10 09:55:40.350190] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:46.616 [2024-06-10 09:55:40.350205] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:46.616 [2024-06-10 09:55:40.350222] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:46.616 [2024-06-10 09:55:40.350234] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:46.616 [2024-06-10 09:55:40.350247] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:46.616 [2024-06-10 09:55:40.350257] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:46.616 [2024-06-10 09:55:40.350271] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:46.616 [2024-06-10 09:55:40.350283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.616 [2024-06-10 09:55:40.350295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:46.616 [2024-06-10 09:55:40.350307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:17:46.616 [2024-06-10 09:55:40.350319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.616 [2024-06-10 09:55:40.350392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.616 [2024-06-10 09:55:40.350414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:46.616 [2024-06-10 09:55:40.350429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:46.616 [2024-06-10 09:55:40.350443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.616 [2024-06-10 09:55:40.350527] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:46.616 [2024-06-10 09:55:40.350544] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:46.616 [2024-06-10 09:55:40.350557] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.616 [2024-06-10 09:55:40.350571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.616 [2024-06-10 09:55:40.350583] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:46.616 [2024-06-10 09:55:40.350598] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:46.616 [2024-06-10 09:55:40.350609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:46.616 [2024-06-10 09:55:40.350623] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:46.616 [2024-06-10 09:55:40.350634] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:46.616 [2024-06-10 09:55:40.350646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.616 [2024-06-10 09:55:40.350656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:46.616 [2024-06-10 09:55:40.350668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:46.616 [2024-06-10 09:55:40.350677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.616 
[2024-06-10 09:55:40.350689] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:46.616 [2024-06-10 09:55:40.350700] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:46.616 [2024-06-10 09:55:40.350712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.616 [2024-06-10 09:55:40.350722] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:46.616 [2024-06-10 09:55:40.350734] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:46.616 [2024-06-10 09:55:40.350744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.616 [2024-06-10 09:55:40.350755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:46.616 [2024-06-10 09:55:40.350765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:46.616 [2024-06-10 09:55:40.350777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:46.616 [2024-06-10 09:55:40.350788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:46.616 [2024-06-10 09:55:40.350802] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:46.616 [2024-06-10 09:55:40.350812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:46.616 [2024-06-10 09:55:40.350824] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:46.616 [2024-06-10 09:55:40.350834] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:46.616 [2024-06-10 09:55:40.350846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:46.616 [2024-06-10 09:55:40.350857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:46.616 [2024-06-10 09:55:40.350868] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:46.616 [2024-06-10 09:55:40.350891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:46.616 [2024-06-10 09:55:40.350904] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:46.616 [2024-06-10 09:55:40.350915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:46.616 [2024-06-10 09:55:40.350926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:46.616 [2024-06-10 09:55:40.350936] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:46.616 [2024-06-10 09:55:40.350948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:46.616 [2024-06-10 09:55:40.350958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.616 [2024-06-10 09:55:40.350971] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:46.616 [2024-06-10 09:55:40.350982] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:46.616 [2024-06-10 09:55:40.350996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.616 [2024-06-10 09:55:40.351006] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:46.616 [2024-06-10 09:55:40.351020] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:46.616 [2024-06-10 09:55:40.351031] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.616 [2024-06-10 09:55:40.351046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.616 [2024-06-10 09:55:40.351058] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region vmap 00:17:46.616 [2024-06-10 09:55:40.351070] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:46.616 [2024-06-10 09:55:40.351080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:46.616 [2024-06-10 09:55:40.351092] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:46.616 [2024-06-10 09:55:40.351102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:46.616 [2024-06-10 09:55:40.351132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:46.616 [2024-06-10 09:55:40.351145] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:46.616 [2024-06-10 09:55:40.351160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.616 [2024-06-10 09:55:40.351173] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:46.616 [2024-06-10 09:55:40.351186] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:46.616 [2024-06-10 09:55:40.351197] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:46.616 [2024-06-10 09:55:40.351213] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:46.616 [2024-06-10 09:55:40.351225] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:46.616 [2024-06-10 09:55:40.351238] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:46.616 [2024-06-10 09:55:40.351249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:46.616 [2024-06-10 09:55:40.351262] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:46.616 [2024-06-10 09:55:40.351273] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:46.616 [2024-06-10 09:55:40.351286] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:46.616 [2024-06-10 09:55:40.351297] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:46.616 [2024-06-10 09:55:40.351310] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:46.616 [2024-06-10 09:55:40.351321] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:46.616 [2024-06-10 09:55:40.351335] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:46.616 [2024-06-10 09:55:40.351347] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.616 [2024-06-10 09:55:40.351388] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:46.616 [2024-06-10 09:55:40.351401] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:46.616 [2024-06-10 09:55:40.351416] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:46.616 [2024-06-10 09:55:40.351428] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:46.616 [2024-06-10 09:55:40.351445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.616 [2024-06-10 09:55:40.351458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:46.616 [2024-06-10 09:55:40.351472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:17:46.616 [2024-06-10 09:55:40.351484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.616 [2024-06-10 09:55:40.368426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.616 [2024-06-10 09:55:40.368487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:46.616 [2024-06-10 09:55:40.368523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.876 ms 00:17:46.616 [2024-06-10 09:55:40.368535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.617 [2024-06-10 09:55:40.368705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.617 [2024-06-10 09:55:40.368733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:46.617 [2024-06-10 09:55:40.368759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:46.617 [2024-06-10 09:55:40.368771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.875 [2024-06-10 09:55:40.405299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.875 [2024-06-10 09:55:40.405372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:46.875 [2024-06-10 09:55:40.405410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.497 ms 00:17:46.875 [2024-06-10 09:55:40.405422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.875 [2024-06-10 09:55:40.405555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.875 [2024-06-10 09:55:40.405588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:46.875 [2024-06-10 09:55:40.405603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:46.875 [2024-06-10 09:55:40.405616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.875 [2024-06-10 09:55:40.405943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.875 [2024-06-10 09:55:40.405969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:46.875 [2024-06-10 09:55:40.405987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:17:46.875 [2024-06-10 09:55:40.405999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.875 [2024-06-10 09:55:40.406187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.875 [2024-06-10 09:55:40.406212] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:46.875 [2024-06-10 09:55:40.406244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:17:46.875 [2024-06-10 09:55:40.406255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.875 [2024-06-10 09:55:40.422713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.875 [2024-06-10 09:55:40.422773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:46.875 [2024-06-10 09:55:40.422809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.423 ms 00:17:46.875 [2024-06-10 09:55:40.422820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.875 [2024-06-10 09:55:40.439439] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:46.875 [2024-06-10 09:55:40.439522] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:46.875 [2024-06-10 09:55:40.439547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.875 [2024-06-10 09:55:40.439561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:46.875 [2024-06-10 09:55:40.439580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.564 ms 00:17:46.875 [2024-06-10 09:55:40.439591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.875 [2024-06-10 09:55:40.468094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.875 [2024-06-10 09:55:40.468185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:46.875 [2024-06-10 09:55:40.468227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.316 ms 00:17:46.875 [2024-06-10 09:55:40.468239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.875 [2024-06-10 09:55:40.482243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.875 [2024-06-10 09:55:40.482296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:46.875 [2024-06-10 09:55:40.482330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.874 ms 00:17:46.875 [2024-06-10 09:55:40.482341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.875 [2024-06-10 09:55:40.496305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.875 [2024-06-10 09:55:40.496356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:46.875 [2024-06-10 09:55:40.496392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.883 ms 00:17:46.875 [2024-06-10 09:55:40.496402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.875 [2024-06-10 09:55:40.496892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.875 [2024-06-10 09:55:40.496954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:46.875 [2024-06-10 09:55:40.496971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:17:46.876 [2024-06-10 09:55:40.496982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.876 [2024-06-10 09:55:40.562960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.876 [2024-06-10 09:55:40.563045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:17:46.876 [2024-06-10 09:55:40.563084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.938 ms 00:17:46.876 [2024-06-10 09:55:40.563113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.876 [2024-06-10 09:55:40.574295] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:46.876 [2024-06-10 09:55:40.587170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.876 [2024-06-10 09:55:40.587292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:46.876 [2024-06-10 09:55:40.587313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.903 ms 00:17:46.876 [2024-06-10 09:55:40.587328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.876 [2024-06-10 09:55:40.587463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.876 [2024-06-10 09:55:40.587489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:46.876 [2024-06-10 09:55:40.587503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:46.876 [2024-06-10 09:55:40.587517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.876 [2024-06-10 09:55:40.587582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.876 [2024-06-10 09:55:40.587601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:46.876 [2024-06-10 09:55:40.587614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:46.876 [2024-06-10 09:55:40.587627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.876 [2024-06-10 09:55:40.589647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.876 [2024-06-10 09:55:40.589699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:46.876 [2024-06-10 09:55:40.589730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:17:46.876 [2024-06-10 09:55:40.589743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.876 [2024-06-10 09:55:40.589777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.876 [2024-06-10 09:55:40.589798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:46.876 [2024-06-10 09:55:40.589810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:46.876 [2024-06-10 09:55:40.589827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.876 [2024-06-10 09:55:40.589885] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:46.876 [2024-06-10 09:55:40.589920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.876 [2024-06-10 09:55:40.589931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:46.876 [2024-06-10 09:55:40.589945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:46.876 [2024-06-10 09:55:40.589955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.876 [2024-06-10 09:55:40.621324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.876 [2024-06-10 09:55:40.621383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:46.876 [2024-06-10 09:55:40.621420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.334 ms 
00:17:46.876 [2024-06-10 09:55:40.621433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.876 [2024-06-10 09:55:40.621560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.876 [2024-06-10 09:55:40.621595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:46.876 [2024-06-10 09:55:40.621641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:46.876 [2024-06-10 09:55:40.621652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.876 [2024-06-10 09:55:40.622954] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:46.876 [2024-06-10 09:55:40.627548] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.734 ms, result 0 00:17:46.876 [2024-06-10 09:55:40.628881] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:47.134 Some configs were skipped because the RPC state that can call them passed over. 00:17:47.134 09:55:40 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:47.134 [2024-06-10 09:55:40.898091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.134 [2024-06-10 09:55:40.898221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:47.134 [2024-06-10 09:55:40.898243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.675 ms 00:17:47.134 [2024-06-10 09:55:40.898262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.134 [2024-06-10 09:55:40.898326] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 28.914 ms, result 0 00:17:47.392 true 00:17:47.392 09:55:40 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:47.392 [2024-06-10 09:55:41.156792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.392 [2024-06-10 09:55:41.156851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:47.392 [2024-06-10 09:55:41.156875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.189 ms 00:17:47.392 [2024-06-10 09:55:41.156887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.392 [2024-06-10 09:55:41.156943] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 34.342 ms, result 0 00:17:47.650 true 00:17:47.650 09:55:41 -- ftl/trim.sh@81 -- # killprocess 73717 00:17:47.650 09:55:41 -- common/autotest_common.sh@926 -- # '[' -z 73717 ']' 00:17:47.650 09:55:41 -- common/autotest_common.sh@930 -- # kill -0 73717 00:17:47.650 09:55:41 -- common/autotest_common.sh@931 -- # uname 00:17:47.650 09:55:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:47.650 09:55:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73717 00:17:47.650 killing process with pid 73717 00:17:47.650 09:55:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:47.650 09:55:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:47.651 09:55:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73717' 00:17:47.651 09:55:41 -- common/autotest_common.sh@945 -- # kill 73717 00:17:47.651 09:55:41 -- 
common/autotest_common.sh@950 -- # wait 73717 00:17:48.590 [2024-06-10 09:55:42.058523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.590 [2024-06-10 09:55:42.058631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:48.590 [2024-06-10 09:55:42.058651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:48.590 [2024-06-10 09:55:42.058665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.590 [2024-06-10 09:55:42.058694] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:48.590 [2024-06-10 09:55:42.061874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.590 [2024-06-10 09:55:42.061921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:48.590 [2024-06-10 09:55:42.061970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.139 ms 00:17:48.590 [2024-06-10 09:55:42.061981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.590 [2024-06-10 09:55:42.062305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.590 [2024-06-10 09:55:42.062332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:48.590 [2024-06-10 09:55:42.062349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:48.590 [2024-06-10 09:55:42.062360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.590 [2024-06-10 09:55:42.066465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.590 [2024-06-10 09:55:42.066506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:48.590 [2024-06-10 09:55:42.066525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.076 ms 00:17:48.590 [2024-06-10 09:55:42.066540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.590 [2024-06-10 09:55:42.073878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.590 [2024-06-10 09:55:42.073943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:48.590 [2024-06-10 09:55:42.073977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.267 ms 00:17:48.590 [2024-06-10 09:55:42.073988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.590 [2024-06-10 09:55:42.086213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.590 [2024-06-10 09:55:42.086299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:48.590 [2024-06-10 09:55:42.086356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.133 ms 00:17:48.590 [2024-06-10 09:55:42.086367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.590 [2024-06-10 09:55:42.095014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.590 [2024-06-10 09:55:42.095094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:48.590 [2024-06-10 09:55:42.095142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.585 ms 00:17:48.590 [2024-06-10 09:55:42.095155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.590 [2024-06-10 09:55:42.095310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.590 [2024-06-10 09:55:42.095330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:17:48.590 [2024-06-10 09:55:42.095346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:48.590 [2024-06-10 09:55:42.095402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.590 [2024-06-10 09:55:42.108068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.590 [2024-06-10 09:55:42.108144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:48.590 [2024-06-10 09:55:42.108181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.621 ms 00:17:48.590 [2024-06-10 09:55:42.108192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.590 [2024-06-10 09:55:42.119643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.590 [2024-06-10 09:55:42.119696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:48.590 [2024-06-10 09:55:42.119752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.403 ms 00:17:48.590 [2024-06-10 09:55:42.119778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.590 [2024-06-10 09:55:42.132035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.590 [2024-06-10 09:55:42.132072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:48.590 [2024-06-10 09:55:42.132091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.210 ms 00:17:48.590 [2024-06-10 09:55:42.132103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.590 [2024-06-10 09:55:42.144841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.590 [2024-06-10 09:55:42.144891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:48.590 [2024-06-10 09:55:42.144942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.645 ms 00:17:48.590 [2024-06-10 09:55:42.144954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.590 [2024-06-10 09:55:42.145001] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:48.590 [2024-06-10 09:55:42.145025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145222] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145552] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:48.590 [2024-06-10 09:55:42.145837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.145849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.145863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.145874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 
09:55:42.145887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.145899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.145913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.145926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.145940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.145951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.145965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.145976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.145991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 
00:17:48.591 [2024-06-10 09:55:42.146235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:48.591 [2024-06-10 09:55:42.146454] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:48.591 [2024-06-10 09:55:42.146484] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22a6fe3e-092b-45c9-bec4-df5d368748af 00:17:48.591 [2024-06-10 09:55:42.146499] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:48.591 [2024-06-10 09:55:42.146513] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:48.591 [2024-06-10 09:55:42.146524] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:48.591 [2024-06-10 09:55:42.146538] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:48.591 [2024-06-10 09:55:42.146550] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:48.591 [2024-06-10 09:55:42.146563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:48.591 [2024-06-10 09:55:42.146575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:48.591 [2024-06-10 09:55:42.146587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:48.591 [2024-06-10 09:55:42.146598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 
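The table and counters above are the shutdown-time dump from ftl_debug.c (ftl_dev_dump_bands / ftl_dev_dump_stats): all 100 bands are still free with wr_cnt 0, and WAF prints 'inf' because the 960 total writes are pure metadata while user writes sit at 0, leaving the write-amplification quotient with a zero denominator. A comparable snapshot can be taken from a live instance without shutting the device down, assuming this SPDK revision ships the bdev_ftl_get_stats RPC (an assumption worth checking against scripts/rpc.py on this tree):

  # Hedged sketch: poll FTL counters from the running target instead of waiting
  # for the ftl_debug.c dump at shutdown. The RPC name is assumed to exist in
  # this SPDK revision.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0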
00:17:48.591 [2024-06-10 09:55:42.146612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.591 [2024-06-10 09:55:42.146638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:48.591 [2024-06-10 09:55:42.146653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:17:48.591 [2024-06-10 09:55:42.146665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-06-10 09:55:42.163462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.591 [2024-06-10 09:55:42.163502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:48.591 [2024-06-10 09:55:42.163524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.755 ms 00:17:48.591 [2024-06-10 09:55:42.163536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-06-10 09:55:42.163810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.591 [2024-06-10 09:55:42.163837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:48.591 [2024-06-10 09:55:42.163854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:17:48.591 [2024-06-10 09:55:42.163865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-06-10 09:55:42.219731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.591 [2024-06-10 09:55:42.219816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:48.591 [2024-06-10 09:55:42.219852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.591 [2024-06-10 09:55:42.219864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-06-10 09:55:42.219989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.591 [2024-06-10 09:55:42.220006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:48.591 [2024-06-10 09:55:42.220020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.591 [2024-06-10 09:55:42.220031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-06-10 09:55:42.220159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.591 [2024-06-10 09:55:42.220177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:48.591 [2024-06-10 09:55:42.220211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.591 [2024-06-10 09:55:42.220223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-06-10 09:55:42.220252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.591 [2024-06-10 09:55:42.220265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:48.591 [2024-06-10 09:55:42.220279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.591 [2024-06-10 09:55:42.220290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.591 [2024-06-10 09:55:42.320219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.591 [2024-06-10 09:55:42.320297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:48.591 [2024-06-10 09:55:42.320335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.591 [2024-06-10 09:55:42.320348] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.848 [2024-06-10 09:55:42.360879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.848 [2024-06-10 09:55:42.360963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:48.848 [2024-06-10 09:55:42.360985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.848 [2024-06-10 09:55:42.360998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.848 [2024-06-10 09:55:42.361130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.848 [2024-06-10 09:55:42.361151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.848 [2024-06-10 09:55:42.361170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.848 [2024-06-10 09:55:42.361181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.848 [2024-06-10 09:55:42.361223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.848 [2024-06-10 09:55:42.361238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.848 [2024-06-10 09:55:42.361252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.848 [2024-06-10 09:55:42.361264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.848 [2024-06-10 09:55:42.361390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.848 [2024-06-10 09:55:42.361411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.848 [2024-06-10 09:55:42.361427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.848 [2024-06-10 09:55:42.361439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.848 [2024-06-10 09:55:42.361493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.848 [2024-06-10 09:55:42.361517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:48.848 [2024-06-10 09:55:42.361533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.848 [2024-06-10 09:55:42.361545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.848 [2024-06-10 09:55:42.361594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.848 [2024-06-10 09:55:42.361611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.848 [2024-06-10 09:55:42.361627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.848 [2024-06-10 09:55:42.361639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.848 [2024-06-10 09:55:42.361696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.848 [2024-06-10 09:55:42.361713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.848 [2024-06-10 09:55:42.361727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.848 [2024-06-10 09:55:42.361739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.849 [2024-06-10 09:55:42.361903] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 303.352 ms, result 0 00:17:49.782 09:55:43 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:49.782 09:55:43 -- ftl/trim.sh@85 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:49.782 [2024-06-10 09:55:43.495485] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:49.782 [2024-06-10 09:55:43.495703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73783 ] 00:17:50.039 [2024-06-10 09:55:43.661298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.298 [2024-06-10 09:55:43.828825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.557 [2024-06-10 09:55:44.105650] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:50.557 [2024-06-10 09:55:44.105759] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:50.557 [2024-06-10 09:55:44.259140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.557 [2024-06-10 09:55:44.259208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:50.557 [2024-06-10 09:55:44.259230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:50.557 [2024-06-10 09:55:44.259247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.557 [2024-06-10 09:55:44.262394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.557 [2024-06-10 09:55:44.262445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.557 [2024-06-10 09:55:44.262462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.117 ms 00:17:50.557 [2024-06-10 09:55:44.262479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.557 [2024-06-10 09:55:44.262596] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:50.557 [2024-06-10 09:55:44.263596] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:50.557 [2024-06-10 09:55:44.263634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.557 [2024-06-10 09:55:44.263655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.557 [2024-06-10 09:55:44.263668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:17:50.557 [2024-06-10 09:55:44.263679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.557 [2024-06-10 09:55:44.264889] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:50.557 [2024-06-10 09:55:44.280999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.557 [2024-06-10 09:55:44.281055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:50.557 [2024-06-10 09:55:44.281087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.113 ms 00:17:50.557 [2024-06-10 09:55:44.281098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.557 [2024-06-10 09:55:44.281233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.557 [2024-06-10 09:55:44.281256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:50.557 [2024-06-10 09:55:44.281273] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:50.557 [2024-06-10 09:55:44.281284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.557 [2024-06-10 09:55:44.285685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.557 [2024-06-10 09:55:44.285756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.557 [2024-06-10 09:55:44.285787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.342 ms 00:17:50.557 [2024-06-10 09:55:44.285798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.557 [2024-06-10 09:55:44.285935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.557 [2024-06-10 09:55:44.285959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.557 [2024-06-10 09:55:44.285972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:50.557 [2024-06-10 09:55:44.285982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.557 [2024-06-10 09:55:44.286037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.557 [2024-06-10 09:55:44.286052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:50.557 [2024-06-10 09:55:44.286064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:50.557 [2024-06-10 09:55:44.286075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.557 [2024-06-10 09:55:44.286112] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:50.557 [2024-06-10 09:55:44.290141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.557 [2024-06-10 09:55:44.290183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.557 [2024-06-10 09:55:44.290213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.043 ms 00:17:50.557 [2024-06-10 09:55:44.290224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.557 [2024-06-10 09:55:44.290288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.557 [2024-06-10 09:55:44.290309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:50.557 [2024-06-10 09:55:44.290321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:50.557 [2024-06-10 09:55:44.290331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.557 [2024-06-10 09:55:44.290355] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:50.557 [2024-06-10 09:55:44.290380] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:50.557 [2024-06-10 09:55:44.290452] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:50.557 [2024-06-10 09:55:44.290472] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:50.557 [2024-06-10 09:55:44.290555] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:50.557 [2024-06-10 09:55:44.290569] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:50.557 [2024-06-10 09:55:44.290583] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:50.557 [2024-06-10 09:55:44.290597] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:50.557 [2024-06-10 09:55:44.290609] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:50.557 [2024-06-10 09:55:44.290620] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:50.557 [2024-06-10 09:55:44.290631] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:50.557 [2024-06-10 09:55:44.290641] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:50.557 [2024-06-10 09:55:44.290651] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:50.557 [2024-06-10 09:55:44.290663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.557 [2024-06-10 09:55:44.290677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:50.557 [2024-06-10 09:55:44.290689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:17:50.557 [2024-06-10 09:55:44.290699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.557 [2024-06-10 09:55:44.290779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.557 [2024-06-10 09:55:44.290794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:50.557 [2024-06-10 09:55:44.290806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:50.557 [2024-06-10 09:55:44.290816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.557 [2024-06-10 09:55:44.290903] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:50.557 [2024-06-10 09:55:44.290928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:50.557 [2024-06-10 09:55:44.290942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.557 [2024-06-10 09:55:44.290958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.557 [2024-06-10 09:55:44.290969] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:50.557 [2024-06-10 09:55:44.290979] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:50.557 [2024-06-10 09:55:44.290990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:50.557 [2024-06-10 09:55:44.291000] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:50.557 [2024-06-10 09:55:44.291010] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:50.557 [2024-06-10 09:55:44.291019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.557 [2024-06-10 09:55:44.291029] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:50.557 [2024-06-10 09:55:44.291039] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:50.558 [2024-06-10 09:55:44.291048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.558 [2024-06-10 09:55:44.291058] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:50.558 [2024-06-10 09:55:44.291068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:50.558 [2024-06-10 09:55:44.291077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.558 
[2024-06-10 09:55:44.291087] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:50.558 [2024-06-10 09:55:44.291097] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:50.558 [2024-06-10 09:55:44.291141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.558 [2024-06-10 09:55:44.291166] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:50.558 [2024-06-10 09:55:44.291177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:50.558 [2024-06-10 09:55:44.291188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:50.558 [2024-06-10 09:55:44.291198] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:50.558 [2024-06-10 09:55:44.291227] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:50.558 [2024-06-10 09:55:44.291238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:50.558 [2024-06-10 09:55:44.291248] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:50.558 [2024-06-10 09:55:44.291258] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:50.558 [2024-06-10 09:55:44.291268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:50.558 [2024-06-10 09:55:44.291278] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:50.558 [2024-06-10 09:55:44.291288] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:50.558 [2024-06-10 09:55:44.291298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:50.558 [2024-06-10 09:55:44.291308] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:50.558 [2024-06-10 09:55:44.291319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:50.558 [2024-06-10 09:55:44.291329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:50.558 [2024-06-10 09:55:44.291339] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:50.558 [2024-06-10 09:55:44.291349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:50.558 [2024-06-10 09:55:44.291368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.558 [2024-06-10 09:55:44.291382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:50.558 [2024-06-10 09:55:44.291392] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:50.558 [2024-06-10 09:55:44.291402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.558 [2024-06-10 09:55:44.291412] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:50.558 [2024-06-10 09:55:44.291423] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:50.558 [2024-06-10 09:55:44.291434] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.558 [2024-06-10 09:55:44.291445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.558 [2024-06-10 09:55:44.291456] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:50.558 [2024-06-10 09:55:44.291466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:50.558 [2024-06-10 09:55:44.291476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:50.558 [2024-06-10 09:55:44.291487] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region data_btm 00:17:50.558 [2024-06-10 09:55:44.291497] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:50.558 [2024-06-10 09:55:44.291507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:50.558 [2024-06-10 09:55:44.291519] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:50.558 [2024-06-10 09:55:44.291538] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.558 [2024-06-10 09:55:44.291551] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:50.558 [2024-06-10 09:55:44.291562] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:50.558 [2024-06-10 09:55:44.291574] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:50.558 [2024-06-10 09:55:44.291585] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:50.558 [2024-06-10 09:55:44.291596] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:50.558 [2024-06-10 09:55:44.291607] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:50.558 [2024-06-10 09:55:44.291619] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:50.558 [2024-06-10 09:55:44.291630] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:50.558 [2024-06-10 09:55:44.291641] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:50.558 [2024-06-10 09:55:44.291652] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:50.558 [2024-06-10 09:55:44.291663] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:50.558 [2024-06-10 09:55:44.291674] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:50.558 [2024-06-10 09:55:44.291686] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:50.558 [2024-06-10 09:55:44.291696] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:50.558 [2024-06-10 09:55:44.291708] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.558 [2024-06-10 09:55:44.291720] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:50.558 [2024-06-10 09:55:44.291732] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:50.558 [2024-06-10 
09:55:44.291759] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:50.558 [2024-06-10 09:55:44.291770] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:50.558 [2024-06-10 09:55:44.291782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.558 [2024-06-10 09:55:44.291799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:50.558 [2024-06-10 09:55:44.291811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:17:50.558 [2024-06-10 09:55:44.291821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.558 [2024-06-10 09:55:44.308982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.558 [2024-06-10 09:55:44.309046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:50.558 [2024-06-10 09:55:44.309079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.078 ms 00:17:50.558 [2024-06-10 09:55:44.309090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.558 [2024-06-10 09:55:44.309267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.558 [2024-06-10 09:55:44.309287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:50.558 [2024-06-10 09:55:44.309300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:50.558 [2024-06-10 09:55:44.309310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.817 [2024-06-10 09:55:44.357321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.817 [2024-06-10 09:55:44.357392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:50.817 [2024-06-10 09:55:44.357427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.980 ms 00:17:50.817 [2024-06-10 09:55:44.357439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.817 [2024-06-10 09:55:44.357582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.817 [2024-06-10 09:55:44.357600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:50.817 [2024-06-10 09:55:44.357626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:50.817 [2024-06-10 09:55:44.357636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.817 [2024-06-10 09:55:44.358019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.817 [2024-06-10 09:55:44.358047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:50.817 [2024-06-10 09:55:44.358061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:17:50.817 [2024-06-10 09:55:44.358072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.817 [2024-06-10 09:55:44.358265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.817 [2024-06-10 09:55:44.358285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:50.817 [2024-06-10 09:55:44.358298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:17:50.817 [2024-06-10 09:55:44.358309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.817 [2024-06-10 09:55:44.376190] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.817 [2024-06-10 09:55:44.376251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:50.817 [2024-06-10 09:55:44.376300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.849 ms 00:17:50.817 [2024-06-10 09:55:44.376311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.817 [2024-06-10 09:55:44.392988] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:50.817 [2024-06-10 09:55:44.393047] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:50.817 [2024-06-10 09:55:44.393095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.817 [2024-06-10 09:55:44.393123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:50.817 [2024-06-10 09:55:44.393151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.626 ms 00:17:50.817 [2024-06-10 09:55:44.393171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.817 [2024-06-10 09:55:44.420225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.817 [2024-06-10 09:55:44.420278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:50.817 [2024-06-10 09:55:44.420310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.950 ms 00:17:50.817 [2024-06-10 09:55:44.420327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.817 [2024-06-10 09:55:44.434739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.817 [2024-06-10 09:55:44.434792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:50.817 [2024-06-10 09:55:44.434823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.327 ms 00:17:50.817 [2024-06-10 09:55:44.434834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.817 [2024-06-10 09:55:44.449196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.817 [2024-06-10 09:55:44.449278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:50.817 [2024-06-10 09:55:44.449310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.281 ms 00:17:50.817 [2024-06-10 09:55:44.449321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.817 [2024-06-10 09:55:44.449794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.817 [2024-06-10 09:55:44.449824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:50.817 [2024-06-10 09:55:44.449839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:17:50.817 [2024-06-10 09:55:44.449849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.817 [2024-06-10 09:55:44.519227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.817 [2024-06-10 09:55:44.519309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:50.818 [2024-06-10 09:55:44.519345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.329 ms 00:17:50.818 [2024-06-10 09:55:44.519364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.818 [2024-06-10 09:55:44.531448] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p 
maximum resident size is: 59 (of 60) MiB 00:17:50.818 [2024-06-10 09:55:44.545028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.818 [2024-06-10 09:55:44.545144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:50.818 [2024-06-10 09:55:44.545166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.494 ms 00:17:50.818 [2024-06-10 09:55:44.545178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.818 [2024-06-10 09:55:44.545329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.818 [2024-06-10 09:55:44.545349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:50.818 [2024-06-10 09:55:44.545363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:50.818 [2024-06-10 09:55:44.545374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.818 [2024-06-10 09:55:44.545443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.818 [2024-06-10 09:55:44.545464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:50.818 [2024-06-10 09:55:44.545476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:50.818 [2024-06-10 09:55:44.545487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.818 [2024-06-10 09:55:44.547640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.818 [2024-06-10 09:55:44.547680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:50.818 [2024-06-10 09:55:44.547729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.121 ms 00:17:50.818 [2024-06-10 09:55:44.547740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.818 [2024-06-10 09:55:44.547806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.818 [2024-06-10 09:55:44.547820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:50.818 [2024-06-10 09:55:44.547831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:50.818 [2024-06-10 09:55:44.547846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.818 [2024-06-10 09:55:44.547887] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:50.818 [2024-06-10 09:55:44.547902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.818 [2024-06-10 09:55:44.547912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:50.818 [2024-06-10 09:55:44.547922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:50.818 [2024-06-10 09:55:44.547949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.818 [2024-06-10 09:55:44.576835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.818 [2024-06-10 09:55:44.576890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:50.818 [2024-06-10 09:55:44.576929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.840 ms 00:17:50.818 [2024-06-10 09:55:44.576940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.818 [2024-06-10 09:55:44.577058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.818 [2024-06-10 09:55:44.577077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
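Everything from the trim.sh@85 invocation above down to this point is the spdk_dd process (spdk_pid73783) bringing ftl0 back up from ftl.json; once 'FTL startup' finishes just below, the read-back begins, and the 'Copying:' progress that follows totals 256/256 MB, which checks out: --count=65536 blocks at the bdev's 4 KiB block size is exactly 256 MiB. For reference, the invocation with paths as logged:

  # The read-back step as the test runs it (paths taken from this log).
  # --count is in bdev blocks: 65536 * 4 KiB = 256 MiB, matching the progress
  # output that follows.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json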
00:17:50.818 [2024-06-10 09:55:44.577090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:50.818 [2024-06-10 09:55:44.577100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.818 [2024-06-10 09:55:44.578200] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:50.818 [2024-06-10 09:55:44.582530] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 318.551 ms, result 0 00:17:50.818 [2024-06-10 09:55:44.583394] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:51.076 [2024-06-10 09:55:44.600310] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:01.947  Copying: 256/256 [MB] (average 23 MBps)[2024-06-10 09:55:55.406784] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:01.947 [2024-06-10 09:55:55.418755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.947 [2024-06-10 09:55:55.418826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:01.947 [2024-06-10 09:55:55.418861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:01.947 [2024-06-10 09:55:55.418880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.947 [2024-06-10 09:55:55.418911] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:01.947 [2024-06-10 09:55:55.422097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.947 [2024-06-10 09:55:55.422151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:01.947 [2024-06-10 09:55:55.422182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.164 ms 00:18:01.947 [2024-06-10 09:55:55.422192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.947 [2024-06-10 09:55:55.422499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.947 [2024-06-10 09:55:55.422526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:01.947 [2024-06-10 09:55:55.422540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:18:01.947 [2024-06-10 09:55:55.422551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.947 [2024-06-10 09:55:55.426246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.947 [2024-06-10 09:55:55.426293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:01.947 [2024-06-10 09:55:55.426322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:18:01.947 [2024-06-10 09:55:55.426334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.947 [2024-06-10 09:55:55.433788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.947 [2024-06-10 09:55:55.433834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:01.947 [2024-06-10
09:55:55.433864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.414 ms 00:18:01.947 [2024-06-10 09:55:55.433875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.947 [2024-06-10 09:55:55.463961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.947 [2024-06-10 09:55:55.464016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:01.947 [2024-06-10 09:55:55.464048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.002 ms 00:18:01.947 [2024-06-10 09:55:55.464060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.947 [2024-06-10 09:55:55.481404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.947 [2024-06-10 09:55:55.481458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:01.947 [2024-06-10 09:55:55.481496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.256 ms 00:18:01.947 [2024-06-10 09:55:55.481507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.947 [2024-06-10 09:55:55.481675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.947 [2024-06-10 09:55:55.481712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:01.948 [2024-06-10 09:55:55.481726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:18:01.948 [2024-06-10 09:55:55.481736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.948 [2024-06-10 09:55:55.511793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.948 [2024-06-10 09:55:55.511860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:01.948 [2024-06-10 09:55:55.511905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.033 ms 00:18:01.948 [2024-06-10 09:55:55.511915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.948 [2024-06-10 09:55:55.541044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.948 [2024-06-10 09:55:55.541097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:01.948 [2024-06-10 09:55:55.541154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.053 ms 00:18:01.948 [2024-06-10 09:55:55.541166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.948 [2024-06-10 09:55:55.570261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.948 [2024-06-10 09:55:55.570315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:01.948 [2024-06-10 09:55:55.570347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.019 ms 00:18:01.948 [2024-06-10 09:55:55.570359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.948 [2024-06-10 09:55:55.601338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.948 [2024-06-10 09:55:55.601392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:01.948 [2024-06-10 09:55:55.601424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.874 ms 00:18:01.948 [2024-06-10 09:55:55.601435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.948 [2024-06-10 09:55:55.601511] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:01.948 [2024-06-10 09:55:55.601535] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601836] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.601991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.602002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.602014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.602025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.602036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.602047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.602058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.602070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.602081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.602092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.602103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:01.948 [2024-06-10 09:55:55.602115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 
[2024-06-10 09:55:55.602140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 
state: free 00:18:01.949 [2024-06-10 09:55:55.602428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:18:01.949 [2024-06-10 09:55:55.602724] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:01.949 [2024-06-10 09:55:55.602750] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22a6fe3e-092b-45c9-bec4-df5d368748af 00:18:01.949 [2024-06-10 09:55:55.602762] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:01.949 [2024-06-10 09:55:55.602772] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:01.949 [2024-06-10 09:55:55.602783] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:01.949 [2024-06-10 09:55:55.602794] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:01.949 [2024-06-10 09:55:55.602804] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:01.949 [2024-06-10 09:55:55.602815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:01.949 [2024-06-10 09:55:55.602825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:01.949 [2024-06-10 09:55:55.602835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:01.949 [2024-06-10 09:55:55.602845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:01.949 [2024-06-10 09:55:55.602856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.949 [2024-06-10 09:55:55.602872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:01.949 [2024-06-10 09:55:55.602884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms 00:18:01.949 [2024-06-10 09:55:55.602894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.949 [2024-06-10 09:55:55.619572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.949 [2024-06-10 09:55:55.619610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:01.949 [2024-06-10 09:55:55.619625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.651 ms 00:18:01.949 [2024-06-10 09:55:55.619637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.949 [2024-06-10 09:55:55.619910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.949 [2024-06-10 09:55:55.619937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:01.949 [2024-06-10 09:55:55.619951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:18:01.949 [2024-06-10 09:55:55.619961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.949 [2024-06-10 09:55:55.666948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.949 [2024-06-10 09:55:55.667012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:01.949 [2024-06-10 09:55:55.667044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.949 [2024-06-10 09:55:55.667056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.949 [2024-06-10 09:55:55.667182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.949 [2024-06-10 09:55:55.667202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:01.949 [2024-06-10 09:55:55.667214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.949 [2024-06-10 09:55:55.667225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:01.949 [2024-06-10 09:55:55.667307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.949 [2024-06-10 09:55:55.667326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:01.949 [2024-06-10 09:55:55.667338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.949 [2024-06-10 09:55:55.667349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.949 [2024-06-10 09:55:55.667385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.949 [2024-06-10 09:55:55.667406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:01.949 [2024-06-10 09:55:55.667418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.949 [2024-06-10 09:55:55.667428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.209 [2024-06-10 09:55:55.760629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.209 [2024-06-10 09:55:55.760708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:02.209 [2024-06-10 09:55:55.760742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.209 [2024-06-10 09:55:55.760753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.209 [2024-06-10 09:55:55.796404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.209 [2024-06-10 09:55:55.796456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:02.209 [2024-06-10 09:55:55.796487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.209 [2024-06-10 09:55:55.796498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.209 [2024-06-10 09:55:55.796555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.209 [2024-06-10 09:55:55.796572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:02.209 [2024-06-10 09:55:55.796584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.209 [2024-06-10 09:55:55.796595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.209 [2024-06-10 09:55:55.796628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.209 [2024-06-10 09:55:55.796640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:02.209 [2024-06-10 09:55:55.796658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.209 [2024-06-10 09:55:55.796668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.209 [2024-06-10 09:55:55.796816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.209 [2024-06-10 09:55:55.796835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:02.209 [2024-06-10 09:55:55.796847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.209 [2024-06-10 09:55:55.796858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.209 [2024-06-10 09:55:55.796909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.209 [2024-06-10 09:55:55.796931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:02.209 [2024-06-10 09:55:55.796944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.209 [2024-06-10 
09:55:55.796960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.209 [2024-06-10 09:55:55.797008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.209 [2024-06-10 09:55:55.797023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:02.209 [2024-06-10 09:55:55.797034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.209 [2024-06-10 09:55:55.797044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.209 [2024-06-10 09:55:55.797097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.209 [2024-06-10 09:55:55.797147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:02.209 [2024-06-10 09:55:55.797168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.209 [2024-06-10 09:55:55.797183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.209 [2024-06-10 09:55:55.797348] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 378.598 ms, result 0 00:18:03.147 00:18:03.147 00:18:03.147 09:55:56 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:03.147 09:55:56 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:03.715 09:55:57 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:03.715 [2024-06-10 09:55:57.455506] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:03.715 [2024-06-10 09:55:57.455667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73932 ] 00:18:03.974 [2024-06-10 09:55:57.615243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.233 [2024-06-10 09:55:57.794640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.493 [2024-06-10 09:55:58.090774] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:04.493 [2024-06-10 09:55:58.090871] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:04.493 [2024-06-10 09:55:58.244890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.493 [2024-06-10 09:55:58.244948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:04.493 [2024-06-10 09:55:58.244984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:04.493 [2024-06-10 09:55:58.245001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.493 [2024-06-10 09:55:58.248085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.493 [2024-06-10 09:55:58.248156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:04.493 [2024-06-10 09:55:58.248188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.055 ms 00:18:04.493 [2024-06-10 09:55:58.248204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.493 [2024-06-10 09:55:58.248351] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 
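(The trim.sh lines above show the verification pattern for this test: cmp checks that the trimmed 4 MiB region of the data file reads back as zeros, md5sum fingerprints the file, and spdk_dd reloads the known random pattern into the ftl0 bdev before the device is restarted. A minimal sketch of the write/readback round trip; the --if/--ob/--count/--json flags are copied from the log, while the --ib/--of readback direction is an assumption about spdk_dd's mirror-image options:)

SPDK=/home/vagrant/spdk_repo/spdk
# write the known pattern into the ftl0 bdev (flags exactly as logged above)
"$SPDK"/build/bin/spdk_dd --if="$SPDK"/test/ftl/random_pattern --ob=ftl0 \
    --count=1024 --json="$SPDK"/test/ftl/config/ftl.json
# read it back out to a file (assumed readback flags)
"$SPDK"/build/bin/spdk_dd --ib=ftl0 --of="$SPDK"/test/ftl/data \
    --count=1024 --json="$SPDK"/test/ftl/config/ftl.json
# matching digests mean the data survived the FTL shutdown/startup cycle
md5sum "$SPDK"/test/ftl/random_pattern "$SPDK"/test/ftl/data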
00:18:04.493 [2024-06-10 09:55:58.249307] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:04.493 [2024-06-10 09:55:58.249358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.493 [2024-06-10 09:55:58.249375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:04.493 [2024-06-10 09:55:58.249387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:18:04.493 [2024-06-10 09:55:58.249398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.493 [2024-06-10 09:55:58.250624] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:04.754 [2024-06-10 09:55:58.266564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.754 [2024-06-10 09:55:58.266618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:04.754 [2024-06-10 09:55:58.266650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.941 ms 00:18:04.754 [2024-06-10 09:55:58.266661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.754 [2024-06-10 09:55:58.266767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.754 [2024-06-10 09:55:58.266788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:04.754 [2024-06-10 09:55:58.266804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:04.754 [2024-06-10 09:55:58.266814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.754 [2024-06-10 09:55:58.270928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.754 [2024-06-10 09:55:58.270981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:04.754 [2024-06-10 09:55:58.271011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.029 ms 00:18:04.754 [2024-06-10 09:55:58.271022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.754 [2024-06-10 09:55:58.271185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.754 [2024-06-10 09:55:58.271212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:04.754 [2024-06-10 09:55:58.271225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:18:04.754 [2024-06-10 09:55:58.271236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.754 [2024-06-10 09:55:58.271277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.754 [2024-06-10 09:55:58.271293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:04.754 [2024-06-10 09:55:58.271305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:04.754 [2024-06-10 09:55:58.271317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.754 [2024-06-10 09:55:58.271353] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:04.754 [2024-06-10 09:55:58.275533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.754 [2024-06-10 09:55:58.275572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:04.754 [2024-06-10 09:55:58.275588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.194 ms 00:18:04.754 [2024-06-10 09:55:58.275599] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:04.754 [2024-06-10 09:55:58.275667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.754 [2024-06-10 09:55:58.275691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:04.754 [2024-06-10 09:55:58.275704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:04.754 [2024-06-10 09:55:58.275730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.754 [2024-06-10 09:55:58.275775] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:04.754 [2024-06-10 09:55:58.275802] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:04.754 [2024-06-10 09:55:58.275874] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:04.754 [2024-06-10 09:55:58.275894] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:04.754 [2024-06-10 09:55:58.275981] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:04.754 [2024-06-10 09:55:58.275998] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:04.754 [2024-06-10 09:55:58.276013] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:04.754 [2024-06-10 09:55:58.276027] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:04.754 [2024-06-10 09:55:58.276041] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:04.754 [2024-06-10 09:55:58.276053] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:04.754 [2024-06-10 09:55:58.276064] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:04.754 [2024-06-10 09:55:58.276075] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:04.754 [2024-06-10 09:55:58.276085] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:04.754 [2024-06-10 09:55:58.276097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.754 [2024-06-10 09:55:58.276114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:04.754 [2024-06-10 09:55:58.276126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:18:04.754 [2024-06-10 09:55:58.276137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.754 [2024-06-10 09:55:58.276232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.754 [2024-06-10 09:55:58.276250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:04.754 [2024-06-10 09:55:58.276263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:04.754 [2024-06-10 09:55:58.276274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.754 [2024-06-10 09:55:58.276362] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:04.754 [2024-06-10 09:55:58.276378] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:04.754 [2024-06-10 09:55:58.276391] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:04.754 [2024-06-10 
09:55:58.276409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:04.754 [2024-06-10 09:55:58.276421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:04.754 [2024-06-10 09:55:58.276431] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:04.754 [2024-06-10 09:55:58.276443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:04.754 [2024-06-10 09:55:58.276453] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:04.754 [2024-06-10 09:55:58.276463] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:04.754 [2024-06-10 09:55:58.276473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:04.754 [2024-06-10 09:55:58.276483] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:04.754 [2024-06-10 09:55:58.276493] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:04.754 [2024-06-10 09:55:58.276503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:04.754 [2024-06-10 09:55:58.276513] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:04.754 [2024-06-10 09:55:58.276523] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:04.754 [2024-06-10 09:55:58.276533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:04.754 [2024-06-10 09:55:58.276543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:04.754 [2024-06-10 09:55:58.276553] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:04.754 [2024-06-10 09:55:58.276563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:04.754 [2024-06-10 09:55:58.276586] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:04.754 [2024-06-10 09:55:58.276596] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:04.754 [2024-06-10 09:55:58.276606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:04.754 [2024-06-10 09:55:58.276617] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:04.754 [2024-06-10 09:55:58.276627] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:04.754 [2024-06-10 09:55:58.276637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:04.754 [2024-06-10 09:55:58.276646] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:04.754 [2024-06-10 09:55:58.276656] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:04.754 [2024-06-10 09:55:58.276666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:04.754 [2024-06-10 09:55:58.276676] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:04.754 [2024-06-10 09:55:58.276686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:04.754 [2024-06-10 09:55:58.276696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:04.755 [2024-06-10 09:55:58.276706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:04.755 [2024-06-10 09:55:58.276716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:04.755 [2024-06-10 09:55:58.276726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:04.755 [2024-06-10 09:55:58.276736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 
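(The sizes in the layout dump above are internally consistent: the 90.00 MiB l2p region is exactly the L2P table announced earlier by ftl_layout_setup (23592960 entries x 4-byte addresses), and the same region reappears in the superblock dump below as type:0x2 with blk_offs:0x20 blk_sz:0x5a00. A quick check, assuming the 4 KiB FTL block size:)

echo $(( 23592960 * 4 / 1024 / 1024 ))   # L2P table size in MiB        -> 90
echo $(( 0x5a00 * 4096 / 1024 / 1024 ))  # blk_sz:0x5a00 in MiB         -> 90
echo $(( 0x20 * 4096 )) bytes            # blk_offs:0x20 -> 131072 B, the 0.12 MiB offset above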
00:18:04.755 [2024-06-10 09:55:58.276746] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:04.755 [2024-06-10 09:55:58.276755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:04.755 [2024-06-10 09:55:58.276765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:04.755 [2024-06-10 09:55:58.276776] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:04.755 [2024-06-10 09:55:58.276786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:04.755 [2024-06-10 09:55:58.276795] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:04.755 [2024-06-10 09:55:58.276806] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:04.755 [2024-06-10 09:55:58.276816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:04.755 [2024-06-10 09:55:58.276827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:04.755 [2024-06-10 09:55:58.276839] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:04.755 [2024-06-10 09:55:58.276849] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:04.755 [2024-06-10 09:55:58.276859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:04.755 [2024-06-10 09:55:58.276869] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:04.755 [2024-06-10 09:55:58.276879] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:04.755 [2024-06-10 09:55:58.276889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:04.755 [2024-06-10 09:55:58.276901] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:04.755 [2024-06-10 09:55:58.276920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:04.755 [2024-06-10 09:55:58.276932] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:04.755 [2024-06-10 09:55:58.276944] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:04.755 [2024-06-10 09:55:58.276955] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:04.755 [2024-06-10 09:55:58.276966] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:04.755 [2024-06-10 09:55:58.276977] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:04.755 [2024-06-10 09:55:58.276988] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:04.755 [2024-06-10 09:55:58.276999] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:04.755 [2024-06-10 09:55:58.277010] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:04.755 [2024-06-10 09:55:58.277021] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 
blk_offs:0x6b60 blk_sz:0x40 00:18:04.755 [2024-06-10 09:55:58.277032] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:04.755 [2024-06-10 09:55:58.277043] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:04.755 [2024-06-10 09:55:58.277054] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:04.755 [2024-06-10 09:55:58.277065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:04.755 [2024-06-10 09:55:58.277076] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:04.755 [2024-06-10 09:55:58.277089] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:04.755 [2024-06-10 09:55:58.277101] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:04.755 [2024-06-10 09:55:58.277127] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:04.755 [2024-06-10 09:55:58.277139] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:04.755 [2024-06-10 09:55:58.277151] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:04.755 [2024-06-10 09:55:58.277163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.755 [2024-06-10 09:55:58.277182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:04.755 [2024-06-10 09:55:58.277193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:18:04.755 [2024-06-10 09:55:58.277204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.755 [2024-06-10 09:55:58.294219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.755 [2024-06-10 09:55:58.294281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:04.755 [2024-06-10 09:55:58.294315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.933 ms 00:18:04.755 [2024-06-10 09:55:58.294326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.755 [2024-06-10 09:55:58.294468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.755 [2024-06-10 09:55:58.294486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:04.755 [2024-06-10 09:55:58.294499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:04.755 [2024-06-10 09:55:58.294509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.755 [2024-06-10 09:55:58.343629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.755 [2024-06-10 09:55:58.343712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:04.755 [2024-06-10 09:55:58.343761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.056 ms 00:18:04.755 [2024-06-10 09:55:58.343773] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:04.755 [2024-06-10 09:55:58.343879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.755 [2024-06-10 09:55:58.343897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:04.755 [2024-06-10 09:55:58.343910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:04.755 [2024-06-10 09:55:58.343920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.755 [2024-06-10 09:55:58.344313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.755 [2024-06-10 09:55:58.344345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:04.755 [2024-06-10 09:55:58.344359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:18:04.755 [2024-06-10 09:55:58.344371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.755 [2024-06-10 09:55:58.344528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.755 [2024-06-10 09:55:58.344553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:04.755 [2024-06-10 09:55:58.344566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:18:04.755 [2024-06-10 09:55:58.344577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.755 [2024-06-10 09:55:58.361245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.755 [2024-06-10 09:55:58.361301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:04.755 [2024-06-10 09:55:58.361318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.637 ms 00:18:04.755 [2024-06-10 09:55:58.361330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.755 [2024-06-10 09:55:58.376331] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:04.755 [2024-06-10 09:55:58.376388] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:04.755 [2024-06-10 09:55:58.376420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.755 [2024-06-10 09:55:58.376432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:04.755 [2024-06-10 09:55:58.376444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.954 ms 00:18:04.755 [2024-06-10 09:55:58.376454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.755 [2024-06-10 09:55:58.403961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.755 [2024-06-10 09:55:58.404015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:04.755 [2024-06-10 09:55:58.404047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.419 ms 00:18:04.755 [2024-06-10 09:55:58.404065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.755 [2024-06-10 09:55:58.418587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.755 [2024-06-10 09:55:58.418640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:04.755 [2024-06-10 09:55:58.418671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.424 ms 00:18:04.755 [2024-06-10 09:55:58.418682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.755 [2024-06-10 
09:55:58.433174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.755 [2024-06-10 09:55:58.433261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:04.755 [2024-06-10 09:55:58.433294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.408 ms 00:18:04.755 [2024-06-10 09:55:58.433304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.755 [2024-06-10 09:55:58.433836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.755 [2024-06-10 09:55:58.433870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:04.755 [2024-06-10 09:55:58.433886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:18:04.755 [2024-06-10 09:55:58.433897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.755 [2024-06-10 09:55:58.505298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.756 [2024-06-10 09:55:58.505385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:04.756 [2024-06-10 09:55:58.505421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.366 ms 00:18:04.756 [2024-06-10 09:55:58.505432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.756 [2024-06-10 09:55:58.517782] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:05.015 [2024-06-10 09:55:58.531261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.015 [2024-06-10 09:55:58.531337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:05.015 [2024-06-10 09:55:58.531395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.667 ms 00:18:05.015 [2024-06-10 09:55:58.531408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.015 [2024-06-10 09:55:58.531540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.015 [2024-06-10 09:55:58.531560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:05.015 [2024-06-10 09:55:58.531573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:05.015 [2024-06-10 09:55:58.531585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.015 [2024-06-10 09:55:58.531657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.015 [2024-06-10 09:55:58.531680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:05.015 [2024-06-10 09:55:58.531692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:05.015 [2024-06-10 09:55:58.531703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.015 [2024-06-10 09:55:58.533600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.015 [2024-06-10 09:55:58.533650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:05.015 [2024-06-10 09:55:58.533681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.867 ms 00:18:05.015 [2024-06-10 09:55:58.533691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.015 [2024-06-10 09:55:58.533745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.015 [2024-06-10 09:55:58.533759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:05.015 [2024-06-10 09:55:58.533771] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:05.015 [2024-06-10 09:55:58.533788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.016 [2024-06-10 09:55:58.533829] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:05.016 [2024-06-10 09:55:58.533844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.016 [2024-06-10 09:55:58.533855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:05.016 [2024-06-10 09:55:58.533866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:05.016 [2024-06-10 09:55:58.533878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.016 [2024-06-10 09:55:58.563521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.016 [2024-06-10 09:55:58.563564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:05.016 [2024-06-10 09:55:58.563589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.596 ms 00:18:05.016 [2024-06-10 09:55:58.563601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.016 [2024-06-10 09:55:58.563740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.016 [2024-06-10 09:55:58.563775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:05.016 [2024-06-10 09:55:58.563788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:05.016 [2024-06-10 09:55:58.563799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.016 [2024-06-10 09:55:58.564673] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:05.016 [2024-06-10 09:55:58.568815] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 319.470 ms, result 0 00:18:05.016 [2024-06-10 09:55:58.569721] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:05.016 [2024-06-10 09:55:58.585883] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:05.016  Copying: 4096/4096 [kB] (average 25 MBps)[2024-06-10 09:55:58.748272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:05.016 [2024-06-10 09:55:58.759666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.016 [2024-06-10 09:55:58.759726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:05.016 [2024-06-10 09:55:58.759761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:05.016 [2024-06-10 09:55:58.759795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.016 [2024-06-10 09:55:58.759827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:05.016 [2024-06-10 09:55:58.762904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.016 [2024-06-10 09:55:58.762948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:05.016 [2024-06-10 09:55:58.762979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.057 ms 00:18:05.016 [2024-06-10 09:55:58.762989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
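(Taken together with the superblock dump above, the band numbers are consistent: the base-device data region (type:0x9, blk_sz:0x1900000) splits evenly into the 100 bands listed in the validity dumps, with each band exposing 261120 of its blocks as user data. A quick check, assuming 4 KiB FTL blocks; reading the 1024 leftover blocks per band as reserved metadata is an inference, since the log only shows the totals:)

echo $(( 0x1900000 / 100 ))              # blocks per band              -> 262144
echo $(( 262144 - 261120 ))              # blocks not in "0 / 261120"   -> 1024
echo $(( 261120 * 4096 / 1024 / 1024 ))  # user-data MiB per band       -> 1020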
00:18:05.016 [2024-06-10 09:55:58.764889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.016 [2024-06-10 09:55:58.764944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:05.016 [2024-06-10 09:55:58.764975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.872 ms 00:18:05.016 [2024-06-10 09:55:58.764986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.016 [2024-06-10 09:55:58.768993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.016 [2024-06-10 09:55:58.769038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:05.016 [2024-06-10 09:55:58.769054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.984 ms 00:18:05.016 [2024-06-10 09:55:58.769065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.016 [2024-06-10 09:55:58.776285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.016 [2024-06-10 09:55:58.776347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:05.016 [2024-06-10 09:55:58.776377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.167 ms 00:18:05.016 [2024-06-10 09:55:58.776388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.287 [2024-06-10 09:55:58.807340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.287 [2024-06-10 09:55:58.807399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:05.287 [2024-06-10 09:55:58.807417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.869 ms 00:18:05.287 [2024-06-10 09:55:58.807429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.287 [2024-06-10 09:55:58.826588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.287 [2024-06-10 09:55:58.826649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:05.287 [2024-06-10 09:55:58.826674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.067 ms 00:18:05.287 [2024-06-10 09:55:58.826686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.287 [2024-06-10 09:55:58.826877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.287 [2024-06-10 09:55:58.826899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:05.287 [2024-06-10 09:55:58.826913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:05.287 [2024-06-10 09:55:58.826925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.287 [2024-06-10 09:55:58.859024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.287 [2024-06-10 09:55:58.859099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:05.287 [2024-06-10 09:55:58.859160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.073 ms 00:18:05.287 [2024-06-10 09:55:58.859172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.287 [2024-06-10 09:55:58.889743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.287 [2024-06-10 09:55:58.889813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:05.287 [2024-06-10 09:55:58.889831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.446 ms 00:18:05.287 [2024-06-10 09:55:58.889842] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.287 [2024-06-10 09:55:58.919111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.287 [2024-06-10 09:55:58.919218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:05.287 [2024-06-10 09:55:58.919234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.175 ms 00:18:05.287 [2024-06-10 09:55:58.919245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.288 [2024-06-10 09:55:58.949489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.288 [2024-06-10 09:55:58.949544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:05.288 [2024-06-10 09:55:58.949560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.136 ms 00:18:05.288 [2024-06-10 09:55:58.949569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.288 [2024-06-10 09:55:58.949645] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:05.288 [2024-06-10 09:55:58.949670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949898] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.949990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 
[2024-06-10 09:55:58.950207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:18:05.288 [2024-06-10 09:55:58.950501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:05.288 [2024-06-10 09:55:58.950698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:05.289 [2024-06-10 09:55:58.950892] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:05.289 [2024-06-10 09:55:58.950920] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22a6fe3e-092b-45c9-bec4-df5d368748af 00:18:05.289 [2024-06-10 09:55:58.950933] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:05.289 [2024-06-10 09:55:58.950944] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:05.289 [2024-06-10 09:55:58.950955] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:05.289 [2024-06-10 09:55:58.950966] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:05.289 [2024-06-10 09:55:58.950977] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:05.289 [2024-06-10 09:55:58.950988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:05.289 [2024-06-10 09:55:58.950999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:05.289 [2024-06-10 09:55:58.951009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:05.289 [2024-06-10 09:55:58.951019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:05.289 [2024-06-10 09:55:58.951030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.289 [2024-06-10 09:55:58.951046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:05.289 [2024-06-10 09:55:58.951058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.387 ms 00:18:05.289 [2024-06-10 09:55:58.951069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.289 [2024-06-10 09:55:58.968078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.289 [2024-06-10 09:55:58.968132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:05.289 [2024-06-10 09:55:58.968150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.982 ms 00:18:05.289 [2024-06-10 09:55:58.968162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.289 [2024-06-10 09:55:58.968439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.289 [2024-06-10 09:55:58.968466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:18:05.289 [2024-06-10 09:55:58.968480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:18:05.289 [2024-06-10 09:55:58.968491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.289 [2024-06-10 09:55:59.017809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.289 [2024-06-10 09:55:59.017882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:05.289 [2024-06-10 09:55:59.017900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.289 [2024-06-10 09:55:59.017912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.289 [2024-06-10 09:55:59.018056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.289 [2024-06-10 09:55:59.018075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:05.289 [2024-06-10 09:55:59.018088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.289 [2024-06-10 09:55:59.018123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.289 [2024-06-10 09:55:59.018192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.289 [2024-06-10 09:55:59.018210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:05.289 [2024-06-10 09:55:59.018222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.289 [2024-06-10 09:55:59.018234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.289 [2024-06-10 09:55:59.018266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.289 [2024-06-10 09:55:59.018280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:05.289 [2024-06-10 09:55:59.018292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.289 [2024-06-10 09:55:59.018303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.573 [2024-06-10 09:55:59.116639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.573 [2024-06-10 09:55:59.116738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:05.573 [2024-06-10 09:55:59.116757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.573 [2024-06-10 09:55:59.116769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.573 [2024-06-10 09:55:59.154997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.573 [2024-06-10 09:55:59.155055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:05.573 [2024-06-10 09:55:59.155086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.573 [2024-06-10 09:55:59.155097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.573 [2024-06-10 09:55:59.155204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.573 [2024-06-10 09:55:59.155224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:05.573 [2024-06-10 09:55:59.155237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.573 [2024-06-10 09:55:59.155249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.574 [2024-06-10 09:55:59.155284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.574 [2024-06-10 
09:55:59.155305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:05.574 [2024-06-10 09:55:59.155317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.574 [2024-06-10 09:55:59.155328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.574 [2024-06-10 09:55:59.155463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.574 [2024-06-10 09:55:59.155483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:05.574 [2024-06-10 09:55:59.155495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.574 [2024-06-10 09:55:59.155507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.574 [2024-06-10 09:55:59.155563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.574 [2024-06-10 09:55:59.155587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:05.574 [2024-06-10 09:55:59.155607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.574 [2024-06-10 09:55:59.155618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.574 [2024-06-10 09:55:59.155666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.574 [2024-06-10 09:55:59.155681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:05.574 [2024-06-10 09:55:59.155693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.574 [2024-06-10 09:55:59.155704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.574 [2024-06-10 09:55:59.155759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.574 [2024-06-10 09:55:59.155776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:05.574 [2024-06-10 09:55:59.155794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.574 [2024-06-10 09:55:59.155810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.574 [2024-06-10 09:55:59.155975] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 396.315 ms, result 0 00:18:06.511 00:18:06.511 00:18:06.511 09:56:00 -- ftl/trim.sh@93 -- # svcpid=73967 00:18:06.511 09:56:00 -- ftl/trim.sh@94 -- # waitforlisten 73967 00:18:06.511 09:56:00 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:06.511 09:56:00 -- common/autotest_common.sh@819 -- # '[' -z 73967 ']' 00:18:06.511 09:56:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.511 09:56:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:06.511 09:56:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.511 09:56:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:06.511 09:56:00 -- common/autotest_common.sh@10 -- # set +x 00:18:06.770 [2024-06-10 09:56:00.303687] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
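At this point ftl/trim.sh launches a fresh spdk_tgt with -L ftl_init, and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A minimal sketch of such a wait loop, assuming the stock scripts/rpc.py client and its -s/-t options; the real waitforlisten helper in autotest_common.sh does more bookkeeping (PID liveness checks, xtrace handling):

# Poll the UNIX-domain RPC socket until the target responds; rpc_get_methods
# is a cheap built-in RPC. The retry count and sleep interval are arbitrary.
for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; then
        echo 'spdk_tgt is listening on /var/tmp/spdk.sock'
        break
    fi
    sleep 0.5
done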
00:18:06.770 [2024-06-10 09:56:00.303852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73967 ] 00:18:06.770 [2024-06-10 09:56:00.471171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.028 [2024-06-10 09:56:00.640707] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:07.028 [2024-06-10 09:56:00.640967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.405 09:56:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:08.405 09:56:01 -- common/autotest_common.sh@852 -- # return 0 00:18:08.405 09:56:01 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:08.664 [2024-06-10 09:56:02.204865] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:08.664 [2024-06-10 09:56:02.204974] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:08.664 [2024-06-10 09:56:02.374712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.664 [2024-06-10 09:56:02.374787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:08.664 [2024-06-10 09:56:02.374826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:08.664 [2024-06-10 09:56:02.374839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.664 [2024-06-10 09:56:02.378005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.664 [2024-06-10 09:56:02.378062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:08.664 [2024-06-10 09:56:02.378097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.139 ms 00:18:08.664 [2024-06-10 09:56:02.378110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.664 [2024-06-10 09:56:02.378263] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:08.664 [2024-06-10 09:56:02.379225] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:08.664 [2024-06-10 09:56:02.379269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.664 [2024-06-10 09:56:02.379284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:08.664 [2024-06-10 09:56:02.379299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:18:08.664 [2024-06-10 09:56:02.379311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.664 [2024-06-10 09:56:02.380586] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:08.664 [2024-06-10 09:56:02.396802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.664 [2024-06-10 09:56:02.396867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:08.664 [2024-06-10 09:56:02.396901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.222 ms 00:18:08.664 [2024-06-10 09:56:02.396916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.664 [2024-06-10 09:56:02.397025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.664 [2024-06-10 09:56:02.397048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:18:08.664 [2024-06-10 09:56:02.397062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:08.664 [2024-06-10 09:56:02.397076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.664 [2024-06-10 09:56:02.401373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.664 [2024-06-10 09:56:02.401419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:08.664 [2024-06-10 09:56:02.401436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.203 ms 00:18:08.664 [2024-06-10 09:56:02.401454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.664 [2024-06-10 09:56:02.401578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.664 [2024-06-10 09:56:02.401601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:08.664 [2024-06-10 09:56:02.401614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:08.664 [2024-06-10 09:56:02.401628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.664 [2024-06-10 09:56:02.401663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.664 [2024-06-10 09:56:02.401702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:08.664 [2024-06-10 09:56:02.401714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:08.664 [2024-06-10 09:56:02.401728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.664 [2024-06-10 09:56:02.401766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:08.664 [2024-06-10 09:56:02.406020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.664 [2024-06-10 09:56:02.406071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:08.664 [2024-06-10 09:56:02.406105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.262 ms 00:18:08.664 [2024-06-10 09:56:02.406144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.664 [2024-06-10 09:56:02.406217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.664 [2024-06-10 09:56:02.406236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:08.664 [2024-06-10 09:56:02.406252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:08.664 [2024-06-10 09:56:02.406264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.664 [2024-06-10 09:56:02.406295] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:08.665 [2024-06-10 09:56:02.406324] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:08.665 [2024-06-10 09:56:02.406367] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:08.665 [2024-06-10 09:56:02.406388] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:08.665 [2024-06-10 09:56:02.406476] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:08.665 [2024-06-10 09:56:02.406493] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:18:08.665 [2024-06-10 09:56:02.406513] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:08.665 [2024-06-10 09:56:02.406528] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:08.665 [2024-06-10 09:56:02.406547] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:08.665 [2024-06-10 09:56:02.406561] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:08.665 [2024-06-10 09:56:02.406575] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:08.665 [2024-06-10 09:56:02.406586] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:08.665 [2024-06-10 09:56:02.406601] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:08.665 [2024-06-10 09:56:02.406614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.665 [2024-06-10 09:56:02.406628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:08.665 [2024-06-10 09:56:02.406641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:18:08.665 [2024-06-10 09:56:02.406654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.665 [2024-06-10 09:56:02.406731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.665 [2024-06-10 09:56:02.406749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:08.665 [2024-06-10 09:56:02.406774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:08.665 [2024-06-10 09:56:02.406788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.665 [2024-06-10 09:56:02.406878] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:08.665 [2024-06-10 09:56:02.406897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:08.665 [2024-06-10 09:56:02.406910] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:08.665 [2024-06-10 09:56:02.406926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.665 [2024-06-10 09:56:02.406939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:08.665 [2024-06-10 09:56:02.406953] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:08.665 [2024-06-10 09:56:02.406966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:08.665 [2024-06-10 09:56:02.406982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:08.665 [2024-06-10 09:56:02.406994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:08.665 [2024-06-10 09:56:02.407007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:08.665 [2024-06-10 09:56:02.407019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:08.665 [2024-06-10 09:56:02.407032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:08.665 [2024-06-10 09:56:02.407044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:08.665 [2024-06-10 09:56:02.407057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:08.665 [2024-06-10 09:56:02.407069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:08.665 [2024-06-10 09:56:02.407082] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.665 [2024-06-10 09:56:02.407094] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:08.665 [2024-06-10 09:56:02.407122] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:08.665 [2024-06-10 09:56:02.407137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.665 [2024-06-10 09:56:02.407151] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:08.665 [2024-06-10 09:56:02.407162] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:08.665 [2024-06-10 09:56:02.407176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:08.665 [2024-06-10 09:56:02.407187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:08.665 [2024-06-10 09:56:02.407202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:08.665 [2024-06-10 09:56:02.407214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:08.665 [2024-06-10 09:56:02.407233] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:08.665 [2024-06-10 09:56:02.407244] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:08.665 [2024-06-10 09:56:02.407258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:08.665 [2024-06-10 09:56:02.407269] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:08.665 [2024-06-10 09:56:02.407283] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:08.665 [2024-06-10 09:56:02.407306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:08.665 [2024-06-10 09:56:02.407321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:08.665 [2024-06-10 09:56:02.407332] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:08.665 [2024-06-10 09:56:02.407345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:08.665 [2024-06-10 09:56:02.407357] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:08.665 [2024-06-10 09:56:02.407382] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:08.665 [2024-06-10 09:56:02.407395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:08.665 [2024-06-10 09:56:02.407408] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:08.665 [2024-06-10 09:56:02.407421] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:08.665 [2024-06-10 09:56:02.407437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:08.665 [2024-06-10 09:56:02.407448] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:08.665 [2024-06-10 09:56:02.407462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:08.665 [2024-06-10 09:56:02.407473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:08.665 [2024-06-10 09:56:02.407490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.665 [2024-06-10 09:56:02.407502] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:08.665 [2024-06-10 09:56:02.407516] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:08.665 [2024-06-10 09:56:02.407528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:18:08.665 [2024-06-10 09:56:02.407541] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:08.665 [2024-06-10 09:56:02.407552] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:08.665 [2024-06-10 09:56:02.407566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:08.665 [2024-06-10 09:56:02.407578] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:08.665 [2024-06-10 09:56:02.407595] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:08.665 [2024-06-10 09:56:02.407609] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:08.665 [2024-06-10 09:56:02.407625] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:08.665 [2024-06-10 09:56:02.407638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:08.665 [2024-06-10 09:56:02.407654] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:08.665 [2024-06-10 09:56:02.407667] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:08.665 [2024-06-10 09:56:02.407681] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:08.665 [2024-06-10 09:56:02.407693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:08.665 [2024-06-10 09:56:02.407707] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:08.665 [2024-06-10 09:56:02.407720] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:08.665 [2024-06-10 09:56:02.407734] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:08.665 [2024-06-10 09:56:02.407747] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:08.665 [2024-06-10 09:56:02.407761] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:08.665 [2024-06-10 09:56:02.407774] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:08.665 [2024-06-10 09:56:02.407788] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:08.665 [2024-06-10 09:56:02.407801] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:08.665 [2024-06-10 09:56:02.407816] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:08.665 [2024-06-10 09:56:02.407829] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:08.665 [2024-06-10 09:56:02.407843] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:08.665 [2024-06-10 09:56:02.407857] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:08.665 [2024-06-10 09:56:02.407876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.665 [2024-06-10 09:56:02.407889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:08.665 [2024-06-10 09:56:02.407903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:18:08.665 [2024-06-10 09:56:02.407915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.665 [2024-06-10 09:56:02.425461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.665 [2024-06-10 09:56:02.425521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:08.665 [2024-06-10 09:56:02.425558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.460 ms 00:18:08.666 [2024-06-10 09:56:02.425571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.666 [2024-06-10 09:56:02.425734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.666 [2024-06-10 09:56:02.425771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:08.666 [2024-06-10 09:56:02.425787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:08.666 [2024-06-10 09:56:02.425799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.463248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.463316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:08.925 [2024-06-10 09:56:02.463354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.416 ms 00:18:08.925 [2024-06-10 09:56:02.463377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.463501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.463519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:08.925 [2024-06-10 09:56:02.463535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:08.925 [2024-06-10 09:56:02.463549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.463870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.463899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:08.925 [2024-06-10 09:56:02.463918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:18:08.925 [2024-06-10 09:56:02.463931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.464079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.464118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:08.925 [2024-06-10 09:56:02.464138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:18:08.925 [2024-06-10 09:56:02.464150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.481832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.481895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:08.925 [2024-06-10 09:56:02.481947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.646 ms 00:18:08.925 [2024-06-10 09:56:02.481961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.497604] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:08.925 [2024-06-10 09:56:02.497659] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:08.925 [2024-06-10 09:56:02.497679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.497692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:08.925 [2024-06-10 09:56:02.497708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.568 ms 00:18:08.925 [2024-06-10 09:56:02.497720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.526345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.526428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:08.925 [2024-06-10 09:56:02.526472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.445 ms 00:18:08.925 [2024-06-10 09:56:02.526485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.541896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.541951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:08.925 [2024-06-10 09:56:02.541986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.264 ms 00:18:08.925 [2024-06-10 09:56:02.541998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.557691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.557733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:08.925 [2024-06-10 09:56:02.557756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.605 ms 00:18:08.925 [2024-06-10 09:56:02.557769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.558273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.558303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:08.925 [2024-06-10 09:56:02.558321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:18:08.925 [2024-06-10 09:56:02.558334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.632330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.632416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:08.925 [2024-06-10 09:56:02.632455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.954 ms 00:18:08.925 [2024-06-10 09:56:02.632471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 
09:56:02.644851] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:08.925 [2024-06-10 09:56:02.659153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.659248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:08.925 [2024-06-10 09:56:02.659287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.528 ms 00:18:08.925 [2024-06-10 09:56:02.659302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.659438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.659464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:08.925 [2024-06-10 09:56:02.659479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:08.925 [2024-06-10 09:56:02.659493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.659555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.659574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:08.925 [2024-06-10 09:56:02.659587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:08.925 [2024-06-10 09:56:02.659601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.661646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.661699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:08.925 [2024-06-10 09:56:02.661746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.015 ms 00:18:08.925 [2024-06-10 09:56:02.661761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.661797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.661819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:08.925 [2024-06-10 09:56:02.661832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:08.925 [2024-06-10 09:56:02.661849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.925 [2024-06-10 09:56:02.661894] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:08.925 [2024-06-10 09:56:02.661915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.925 [2024-06-10 09:56:02.661927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:08.925 [2024-06-10 09:56:02.661942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:08.925 [2024-06-10 09:56:02.661954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.184 [2024-06-10 09:56:02.692906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.184 [2024-06-10 09:56:02.692964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:09.184 [2024-06-10 09:56:02.693001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.917 ms 00:18:09.184 [2024-06-10 09:56:02.693013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.184 [2024-06-10 09:56:02.693170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.184 [2024-06-10 09:56:02.693191] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:09.184 [2024-06-10 09:56:02.693208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:09.184 [2024-06-10 09:56:02.693220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.184 [2024-06-10 09:56:02.694275] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:09.184 [2024-06-10 09:56:02.698312] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 319.211 ms, result 0 00:18:09.184 [2024-06-10 09:56:02.699360] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:09.184 Some configs were skipped because the RPC state that can call them passed over. 00:18:09.184 09:56:02 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:09.443 [2024-06-10 09:56:03.020789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.443 [2024-06-10 09:56:03.020860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:09.443 [2024-06-10 09:56:03.020883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.910 ms 00:18:09.443 [2024-06-10 09:56:03.020899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.443 [2024-06-10 09:56:03.020951] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 33.076 ms, result 0 00:18:09.443 true 00:18:09.443 09:56:03 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:09.702 [2024-06-10 09:56:03.277279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.702 [2024-06-10 09:56:03.277354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:09.702 [2024-06-10 09:56:03.277395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.957 ms 00:18:09.702 [2024-06-10 09:56:03.277408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.702 [2024-06-10 09:56:03.277461] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 32.158 ms, result 0 00:18:09.702 true 00:18:09.702 09:56:03 -- ftl/trim.sh@102 -- # killprocess 73967 00:18:09.702 09:56:03 -- common/autotest_common.sh@926 -- # '[' -z 73967 ']' 00:18:09.702 09:56:03 -- common/autotest_common.sh@930 -- # kill -0 73967 00:18:09.702 09:56:03 -- common/autotest_common.sh@931 -- # uname 00:18:09.702 09:56:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:09.702 09:56:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73967 00:18:09.702 killing process with pid 73967 00:18:09.702 09:56:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:09.702 09:56:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:09.702 09:56:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73967' 00:18:09.703 09:56:03 -- common/autotest_common.sh@945 -- # kill 73967 00:18:09.703 09:56:03 -- common/autotest_common.sh@950 -- # wait 73967 00:18:10.641 [2024-06-10 09:56:04.220621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.641 [2024-06-10 09:56:04.220716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:18:10.641 [2024-06-10 09:56:04.220753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:10.641 [2024-06-10 09:56:04.220768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.641 [2024-06-10 09:56:04.220800] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:10.641 [2024-06-10 09:56:04.224113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.641 [2024-06-10 09:56:04.224186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:10.641 [2024-06-10 09:56:04.224212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.287 ms 00:18:10.641 [2024-06-10 09:56:04.224225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.641 [2024-06-10 09:56:04.224546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.641 [2024-06-10 09:56:04.224575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:10.641 [2024-06-10 09:56:04.224593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:18:10.641 [2024-06-10 09:56:04.224605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.641 [2024-06-10 09:56:04.228857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.641 [2024-06-10 09:56:04.228896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:10.641 [2024-06-10 09:56:04.228915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms 00:18:10.641 [2024-06-10 09:56:04.228930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.641 [2024-06-10 09:56:04.236532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.641 [2024-06-10 09:56:04.236582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:10.641 [2024-06-10 09:56:04.236615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.551 ms 00:18:10.641 [2024-06-10 09:56:04.236628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.641 [2024-06-10 09:56:04.249021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.641 [2024-06-10 09:56:04.249075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:10.641 [2024-06-10 09:56:04.249135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.325 ms 00:18:10.641 [2024-06-10 09:56:04.249150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.641 [2024-06-10 09:56:04.257481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.641 [2024-06-10 09:56:04.257555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:10.641 [2024-06-10 09:56:04.257579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.278 ms 00:18:10.641 [2024-06-10 09:56:04.257592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.641 [2024-06-10 09:56:04.257753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.641 [2024-06-10 09:56:04.257773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:10.641 [2024-06-10 09:56:04.257789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:10.641 [2024-06-10 09:56:04.257801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:10.641 [2024-06-10 09:56:04.270662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.641 [2024-06-10 09:56:04.270716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:10.641 [2024-06-10 09:56:04.270751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.832 ms 00:18:10.641 [2024-06-10 09:56:04.270764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.641 [2024-06-10 09:56:04.283499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.641 [2024-06-10 09:56:04.283541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:10.641 [2024-06-10 09:56:04.283566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.684 ms 00:18:10.641 [2024-06-10 09:56:04.283579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.641 [2024-06-10 09:56:04.295858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.641 [2024-06-10 09:56:04.295896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:10.641 [2024-06-10 09:56:04.295915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.230 ms 00:18:10.641 [2024-06-10 09:56:04.295927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.641 [2024-06-10 09:56:04.308187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.641 [2024-06-10 09:56:04.308224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:10.641 [2024-06-10 09:56:04.308243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.168 ms 00:18:10.641 [2024-06-10 09:56:04.308255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.641 [2024-06-10 09:56:04.308302] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:10.641 [2024-06-10 09:56:04.308327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308482] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 
09:56:04.308819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:10.641 [2024-06-10 09:56:04.308916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.308928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.308942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.308954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.308969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.308981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.308995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:18:10.642 [2024-06-10 09:56:04.309173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:10.642 [2024-06-10 09:56:04.309710] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:10.642 [2024-06-10 09:56:04.309741] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22a6fe3e-092b-45c9-bec4-df5d368748af 00:18:10.642 [2024-06-10 09:56:04.309757] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:10.642 [2024-06-10 09:56:04.309770] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:10.642 [2024-06-10 09:56:04.309782] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:10.642 [2024-06-10 09:56:04.309796] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:10.642 [2024-06-10 09:56:04.309807] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:10.642 [2024-06-10 09:56:04.309821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:10.642 [2024-06-10 09:56:04.309833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:10.642 [2024-06-10 09:56:04.309845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:10.642 [2024-06-10 09:56:04.309855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:10.642 [2024-06-10 09:56:04.309869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.642 [2024-06-10 09:56:04.309882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:10.642 [2024-06-10 09:56:04.309896] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.571 ms 00:18:10.642 [2024-06-10 09:56:04.309908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.642 [2024-06-10 09:56:04.326362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.642 [2024-06-10 09:56:04.326428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:10.642 [2024-06-10 09:56:04.326452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.406 ms 00:18:10.642 [2024-06-10 09:56:04.326465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.642 [2024-06-10 09:56:04.326753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.642 [2024-06-10 09:56:04.326784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:10.642 [2024-06-10 09:56:04.326802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:18:10.642 [2024-06-10 09:56:04.326814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.642 [2024-06-10 09:56:04.382658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.642 [2024-06-10 09:56:04.382729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.642 [2024-06-10 09:56:04.382766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.642 [2024-06-10 09:56:04.382779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.642 [2024-06-10 09:56:04.382902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.642 [2024-06-10 09:56:04.382921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.642 [2024-06-10 09:56:04.382935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.642 [2024-06-10 09:56:04.382947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.642 [2024-06-10 09:56:04.383036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.642 [2024-06-10 09:56:04.383054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.642 [2024-06-10 09:56:04.383072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.642 [2024-06-10 09:56:04.383084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.642 [2024-06-10 09:56:04.383113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.642 [2024-06-10 09:56:04.383126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.642 [2024-06-10 09:56:04.383156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.642 [2024-06-10 09:56:04.383170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.902 [2024-06-10 09:56:04.482314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.902 [2024-06-10 09:56:04.482396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.902 [2024-06-10 09:56:04.482434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.902 [2024-06-10 09:56:04.482447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.902 [2024-06-10 09:56:04.519912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.902 [2024-06-10 09:56:04.519976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:18:10.902 [2024-06-10 09:56:04.519997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.902 [2024-06-10 09:56:04.520009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.902 [2024-06-10 09:56:04.520106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.902 [2024-06-10 09:56:04.520144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.902 [2024-06-10 09:56:04.520182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.902 [2024-06-10 09:56:04.520195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.902 [2024-06-10 09:56:04.520236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.902 [2024-06-10 09:56:04.520250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.902 [2024-06-10 09:56:04.520264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.902 [2024-06-10 09:56:04.520275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.902 [2024-06-10 09:56:04.520404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.902 [2024-06-10 09:56:04.520425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.902 [2024-06-10 09:56:04.520440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.902 [2024-06-10 09:56:04.520452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.902 [2024-06-10 09:56:04.520512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.902 [2024-06-10 09:56:04.520531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:10.902 [2024-06-10 09:56:04.520547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.902 [2024-06-10 09:56:04.520559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.902 [2024-06-10 09:56:04.520607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.902 [2024-06-10 09:56:04.520624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.902 [2024-06-10 09:56:04.520641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.902 [2024-06-10 09:56:04.520653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.902 [2024-06-10 09:56:04.520710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.902 [2024-06-10 09:56:04.520727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.902 [2024-06-10 09:56:04.520741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.902 [2024-06-10 09:56:04.520753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.902 [2024-06-10 09:56:04.520918] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 300.267 ms, result 0 00:18:11.839 09:56:05 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:12.098 [2024-06-10 09:56:05.681810] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:18:12.098 [2024-06-10 09:56:05.681993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74034 ] 00:18:12.098 [2024-06-10 09:56:05.850364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.357 [2024-06-10 09:56:06.026148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.616 [2024-06-10 09:56:06.326422] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.616 [2024-06-10 09:56:06.326528] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.876 [2024-06-10 09:56:06.481984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.876 [2024-06-10 09:56:06.482067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:12.876 [2024-06-10 09:56:06.482103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:12.876 [2024-06-10 09:56:06.482130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.876 [2024-06-10 09:56:06.485276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.876 [2024-06-10 09:56:06.485331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.876 [2024-06-10 09:56:06.485362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms 00:18:12.876 [2024-06-10 09:56:06.485378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.876 [2024-06-10 09:56:06.485508] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:12.876 [2024-06-10 09:56:06.486469] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:12.876 [2024-06-10 09:56:06.486519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.876 [2024-06-10 09:56:06.486537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.876 [2024-06-10 09:56:06.486549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:18:12.876 [2024-06-10 09:56:06.486560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.876 [2024-06-10 09:56:06.487869] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:12.876 [2024-06-10 09:56:06.503116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.876 [2024-06-10 09:56:06.503192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:12.876 [2024-06-10 09:56:06.503241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.248 ms 00:18:12.876 [2024-06-10 09:56:06.503252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.876 [2024-06-10 09:56:06.503363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.876 [2024-06-10 09:56:06.503411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:12.876 [2024-06-10 09:56:06.503429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:12.877 [2024-06-10 09:56:06.503440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.877 [2024-06-10 09:56:06.507593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.877 [2024-06-10 
09:56:06.507632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.877 [2024-06-10 09:56:06.507647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.094 ms 00:18:12.877 [2024-06-10 09:56:06.507657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.877 [2024-06-10 09:56:06.507804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.877 [2024-06-10 09:56:06.507829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.877 [2024-06-10 09:56:06.507841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:18:12.877 [2024-06-10 09:56:06.507852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.877 [2024-06-10 09:56:06.507905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.877 [2024-06-10 09:56:06.507921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:12.877 [2024-06-10 09:56:06.507933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:12.877 [2024-06-10 09:56:06.507943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.877 [2024-06-10 09:56:06.507978] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:12.877 [2024-06-10 09:56:06.512215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.877 [2024-06-10 09:56:06.512266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.877 [2024-06-10 09:56:06.512295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.250 ms 00:18:12.877 [2024-06-10 09:56:06.512305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.877 [2024-06-10 09:56:06.512368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.877 [2024-06-10 09:56:06.512390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:12.877 [2024-06-10 09:56:06.512402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:12.877 [2024-06-10 09:56:06.512413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.877 [2024-06-10 09:56:06.512444] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:12.877 [2024-06-10 09:56:06.512471] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:12.877 [2024-06-10 09:56:06.512544] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:12.877 [2024-06-10 09:56:06.512565] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:12.877 [2024-06-10 09:56:06.512652] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:12.877 [2024-06-10 09:56:06.512667] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:12.877 [2024-06-10 09:56:06.512681] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:12.877 [2024-06-10 09:56:06.512696] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:12.877 [2024-06-10 09:56:06.512708] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:12.877 [2024-06-10 09:56:06.512720] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:12.877 [2024-06-10 09:56:06.512731] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:12.877 [2024-06-10 09:56:06.512741] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:12.877 [2024-06-10 09:56:06.512751] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:12.877 [2024-06-10 09:56:06.512763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.877 [2024-06-10 09:56:06.512779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:12.877 [2024-06-10 09:56:06.512791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:18:12.877 [2024-06-10 09:56:06.512801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.877 [2024-06-10 09:56:06.512883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.877 [2024-06-10 09:56:06.512900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:12.877 [2024-06-10 09:56:06.512912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:12.877 [2024-06-10 09:56:06.512922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.877 [2024-06-10 09:56:06.513009] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:12.877 [2024-06-10 09:56:06.513042] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:12.877 [2024-06-10 09:56:06.513056] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.877 [2024-06-10 09:56:06.513073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.877 [2024-06-10 09:56:06.513084] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:12.877 [2024-06-10 09:56:06.513096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:12.877 [2024-06-10 09:56:06.513120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:12.877 [2024-06-10 09:56:06.513132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:12.877 [2024-06-10 09:56:06.513143] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:12.877 [2024-06-10 09:56:06.513153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.877 [2024-06-10 09:56:06.513162] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:12.877 [2024-06-10 09:56:06.513172] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:12.877 [2024-06-10 09:56:06.513182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.877 [2024-06-10 09:56:06.513192] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:12.877 [2024-06-10 09:56:06.513202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:12.877 [2024-06-10 09:56:06.513211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.877 [2024-06-10 09:56:06.513221] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:12.877 [2024-06-10 09:56:06.513231] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:12.877 [2024-06-10 09:56:06.513241] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:18:12.877 [2024-06-10 09:56:06.513264] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:12.877 [2024-06-10 09:56:06.513275] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:12.877 [2024-06-10 09:56:06.513285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:12.877 [2024-06-10 09:56:06.513295] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:12.877 [2024-06-10 09:56:06.513305] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:12.877 [2024-06-10 09:56:06.513315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:12.877 [2024-06-10 09:56:06.513325] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:12.877 [2024-06-10 09:56:06.513335] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:12.877 [2024-06-10 09:56:06.513345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:12.877 [2024-06-10 09:56:06.513355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:12.877 [2024-06-10 09:56:06.513365] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:12.877 [2024-06-10 09:56:06.513375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:12.877 [2024-06-10 09:56:06.513384] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:12.877 [2024-06-10 09:56:06.513395] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:12.877 [2024-06-10 09:56:06.513404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:12.877 [2024-06-10 09:56:06.513414] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:12.877 [2024-06-10 09:56:06.513424] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:12.877 [2024-06-10 09:56:06.513434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.877 [2024-06-10 09:56:06.513444] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:12.877 [2024-06-10 09:56:06.513454] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:12.877 [2024-06-10 09:56:06.513464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.877 [2024-06-10 09:56:06.513474] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:12.877 [2024-06-10 09:56:06.513484] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:12.877 [2024-06-10 09:56:06.513495] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.877 [2024-06-10 09:56:06.513506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.877 [2024-06-10 09:56:06.513517] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:12.877 [2024-06-10 09:56:06.513527] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:12.877 [2024-06-10 09:56:06.513537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:12.877 [2024-06-10 09:56:06.513547] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:12.877 [2024-06-10 09:56:06.513557] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:12.877 [2024-06-10 09:56:06.513567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:12.877 [2024-06-10 09:56:06.513578] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:12.877 [2024-06-10 09:56:06.513597] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.877 [2024-06-10 09:56:06.513610] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:12.877 [2024-06-10 09:56:06.513622] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:12.877 [2024-06-10 09:56:06.513633] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:12.877 [2024-06-10 09:56:06.513644] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:12.877 [2024-06-10 09:56:06.513655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:12.877 [2024-06-10 09:56:06.513666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:12.877 [2024-06-10 09:56:06.513676] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:12.878 [2024-06-10 09:56:06.513687] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:12.878 [2024-06-10 09:56:06.513698] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:12.878 [2024-06-10 09:56:06.513709] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:12.878 [2024-06-10 09:56:06.513719] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:12.878 [2024-06-10 09:56:06.513730] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:12.878 [2024-06-10 09:56:06.513742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:12.878 [2024-06-10 09:56:06.513753] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:12.878 [2024-06-10 09:56:06.513766] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.878 [2024-06-10 09:56:06.513778] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:12.878 [2024-06-10 09:56:06.513790] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:12.878 [2024-06-10 09:56:06.513802] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:12.878 [2024-06-10 09:56:06.513815] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:12.878 [2024-06-10 09:56:06.513827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.878 [2024-06-10 09:56:06.513845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:12.878 [2024-06-10 09:56:06.513857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:18:12.878 [2024-06-10 09:56:06.513868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.878 [2024-06-10 09:56:06.531244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.878 [2024-06-10 09:56:06.531310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.878 [2024-06-10 09:56:06.531327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.291 ms 00:18:12.878 [2024-06-10 09:56:06.531337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.878 [2024-06-10 09:56:06.531512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.878 [2024-06-10 09:56:06.531533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:12.878 [2024-06-10 09:56:06.531546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:12.878 [2024-06-10 09:56:06.531556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.878 [2024-06-10 09:56:06.577724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.878 [2024-06-10 09:56:06.577803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.878 [2024-06-10 09:56:06.577839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.134 ms 00:18:12.878 [2024-06-10 09:56:06.577851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.878 [2024-06-10 09:56:06.577988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.878 [2024-06-10 09:56:06.578007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.878 [2024-06-10 09:56:06.578019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:12.878 [2024-06-10 09:56:06.578030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.878 [2024-06-10 09:56:06.578439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.878 [2024-06-10 09:56:06.578469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.878 [2024-06-10 09:56:06.578483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:18:12.878 [2024-06-10 09:56:06.578494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.878 [2024-06-10 09:56:06.578650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.878 [2024-06-10 09:56:06.578675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.878 [2024-06-10 09:56:06.578689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:18:12.878 [2024-06-10 09:56:06.578700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.878 [2024-06-10 09:56:06.595762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.878 [2024-06-10 09:56:06.595817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.878 [2024-06-10 09:56:06.595866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.031 ms 00:18:12.878 
[2024-06-10 09:56:06.595877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.878 [2024-06-10 09:56:06.611526] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:12.878 [2024-06-10 09:56:06.611623] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:12.878 [2024-06-10 09:56:06.611645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.878 [2024-06-10 09:56:06.611658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:12.878 [2024-06-10 09:56:06.611676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.617 ms 00:18:12.878 [2024-06-10 09:56:06.611687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.878 [2024-06-10 09:56:06.640167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.878 [2024-06-10 09:56:06.640237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:12.878 [2024-06-10 09:56:06.640273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.273 ms 00:18:12.878 [2024-06-10 09:56:06.640292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.139 [2024-06-10 09:56:06.656044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.139 [2024-06-10 09:56:06.656144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:13.139 [2024-06-10 09:56:06.656177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.623 ms 00:18:13.139 [2024-06-10 09:56:06.656188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.139 [2024-06-10 09:56:06.670677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.139 [2024-06-10 09:56:06.670742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:13.139 [2024-06-10 09:56:06.670772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.407 ms 00:18:13.139 [2024-06-10 09:56:06.670783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.139 [2024-06-10 09:56:06.671286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.139 [2024-06-10 09:56:06.671316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:13.139 [2024-06-10 09:56:06.671330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:18:13.139 [2024-06-10 09:56:06.671341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.139 [2024-06-10 09:56:06.743722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.139 [2024-06-10 09:56:06.743836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:13.139 [2024-06-10 09:56:06.743872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.291 ms 00:18:13.139 [2024-06-10 09:56:06.743883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.139 [2024-06-10 09:56:06.756165] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:13.139 [2024-06-10 09:56:06.769174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.139 [2024-06-10 09:56:06.769247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:13.139 [2024-06-10 09:56:06.769281] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.134 ms 00:18:13.139 [2024-06-10 09:56:06.769292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.139 [2024-06-10 09:56:06.769420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.139 [2024-06-10 09:56:06.769440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:13.139 [2024-06-10 09:56:06.769452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:13.139 [2024-06-10 09:56:06.769463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.139 [2024-06-10 09:56:06.769528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.139 [2024-06-10 09:56:06.769565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:13.139 [2024-06-10 09:56:06.769593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:13.139 [2024-06-10 09:56:06.769604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.139 [2024-06-10 09:56:06.771558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.139 [2024-06-10 09:56:06.771598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:13.139 [2024-06-10 09:56:06.771612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.922 ms 00:18:13.139 [2024-06-10 09:56:06.771623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.139 [2024-06-10 09:56:06.771663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.139 [2024-06-10 09:56:06.771678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:13.139 [2024-06-10 09:56:06.771690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:13.139 [2024-06-10 09:56:06.771721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.139 [2024-06-10 09:56:06.771768] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:13.139 [2024-06-10 09:56:06.771784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.139 [2024-06-10 09:56:06.771810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:13.139 [2024-06-10 09:56:06.771821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:13.139 [2024-06-10 09:56:06.771832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.139 [2024-06-10 09:56:06.801035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.139 [2024-06-10 09:56:06.801093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:13.139 [2024-06-10 09:56:06.801140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.177 ms 00:18:13.139 [2024-06-10 09:56:06.801152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.139 [2024-06-10 09:56:06.801272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.139 [2024-06-10 09:56:06.801292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:13.139 [2024-06-10 09:56:06.801305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:13.139 [2024-06-10 09:56:06.801316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.139 [2024-06-10 09:56:06.802341] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:13.139 [2024-06-10 09:56:06.806552] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 319.937 ms, result 0 00:18:13.139 [2024-06-10 09:56:06.807352] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:13.139 [2024-06-10 09:56:06.824467] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:23.645  Copying: 28/256 [MB] (28 MBps) Copying: 53/256 [MB] (25 MBps) Copying: 79/256 [MB] (25 MBps) Copying: 105/256 [MB] (25 MBps) Copying: 130/256 [MB] (24 MBps) Copying: 155/256 [MB] (25 MBps) Copying: 181/256 [MB] (25 MBps) Copying: 206/256 [MB] (24 MBps) Copying: 230/256 [MB] (23 MBps) Copying: 255/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-06-10 09:56:17.236283] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:23.645 [2024-06-10 09:56:17.251957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.645 [2024-06-10 09:56:17.252022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:23.645 [2024-06-10 09:56:17.252046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:23.645 [2024-06-10 09:56:17.252086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.645 [2024-06-10 09:56:17.252143] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:23.645 [2024-06-10 09:56:17.257080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.645 [2024-06-10 09:56:17.257135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:23.645 [2024-06-10 09:56:17.257155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.905 ms 00:18:23.645 [2024-06-10 09:56:17.257168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.645 [2024-06-10 09:56:17.257586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.645 [2024-06-10 09:56:17.257624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:23.645 [2024-06-10 09:56:17.257641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:18:23.645 [2024-06-10 09:56:17.257660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.645 [2024-06-10 09:56:17.262375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.645 [2024-06-10 09:56:17.262450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:23.645 [2024-06-10 09:56:17.262470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.685 ms 00:18:23.645 [2024-06-10 09:56:17.262484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.645 [2024-06-10 09:56:17.272702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.645 [2024-06-10 09:56:17.272754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:23.645 [2024-06-10 09:56:17.272775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.122 ms 00:18:23.645 [2024-06-10 09:56:17.272789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.645 [2024-06-10 09:56:17.310857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:23.645 [2024-06-10 09:56:17.310929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:23.645 [2024-06-10 09:56:17.310953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.953 ms 00:18:23.645 [2024-06-10 09:56:17.310966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.645 [2024-06-10 09:56:17.333099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.645 [2024-06-10 09:56:17.333184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:23.645 [2024-06-10 09:56:17.333216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.001 ms 00:18:23.645 [2024-06-10 09:56:17.333231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.645 [2024-06-10 09:56:17.333478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.645 [2024-06-10 09:56:17.333519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:23.645 [2024-06-10 09:56:17.333540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:18:23.645 [2024-06-10 09:56:17.333554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.645 [2024-06-10 09:56:17.374054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.645 [2024-06-10 09:56:17.374129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:23.645 [2024-06-10 09:56:17.374173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.466 ms 00:18:23.645 [2024-06-10 09:56:17.374188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.905 [2024-06-10 09:56:17.412967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.905 [2024-06-10 09:56:17.413038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:23.905 [2024-06-10 09:56:17.413061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.671 ms 00:18:23.905 [2024-06-10 09:56:17.413075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.905 [2024-06-10 09:56:17.450544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.905 [2024-06-10 09:56:17.450596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:23.905 [2024-06-10 09:56:17.450617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.354 ms 00:18:23.905 [2024-06-10 09:56:17.450630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.905 [2024-06-10 09:56:17.488172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.905 [2024-06-10 09:56:17.488222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:23.905 [2024-06-10 09:56:17.488242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.404 ms 00:18:23.905 [2024-06-10 09:56:17.488256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.905 [2024-06-10 09:56:17.488355] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:23.905 [2024-06-10 09:56:17.488386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488427] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488883] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.488975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:23.905 [2024-06-10 09:56:17.489286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 
[2024-06-10 09:56:17.489400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:18:23.906 [2024-06-10 09:56:17.489841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.489994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.490008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.490022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.490036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.490058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.490085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.490131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.490151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.490166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.490180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.490194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.490208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.490223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:23.906 [2024-06-10 09:56:17.490248] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:23.906 [2024-06-10 09:56:17.490282] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22a6fe3e-092b-45c9-bec4-df5d368748af 
00:18:23.906 [2024-06-10 09:56:17.490297] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:23.906 [2024-06-10 09:56:17.490309] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:23.906 [2024-06-10 09:56:17.490330] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:23.906 [2024-06-10 09:56:17.490354] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:23.906 [2024-06-10 09:56:17.490377] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:23.906 [2024-06-10 09:56:17.490400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:23.906 [2024-06-10 09:56:17.490419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:23.906 [2024-06-10 09:56:17.490432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:23.906 [2024-06-10 09:56:17.490444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:23.906 [2024-06-10 09:56:17.490458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.906 [2024-06-10 09:56:17.490480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:23.906 [2024-06-10 09:56:17.490495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.105 ms 00:18:23.906 [2024-06-10 09:56:17.490508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.906 [2024-06-10 09:56:17.510591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.906 [2024-06-10 09:56:17.510643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:23.906 [2024-06-10 09:56:17.510662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.048 ms 00:18:23.906 [2024-06-10 09:56:17.510676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.906 [2024-06-10 09:56:17.511054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.906 [2024-06-10 09:56:17.511093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:23.906 [2024-06-10 09:56:17.511131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:18:23.906 [2024-06-10 09:56:17.511146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.906 [2024-06-10 09:56:17.570693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.906 [2024-06-10 09:56:17.570778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:23.906 [2024-06-10 09:56:17.570800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.906 [2024-06-10 09:56:17.570815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.906 [2024-06-10 09:56:17.571004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.906 [2024-06-10 09:56:17.571028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:23.906 [2024-06-10 09:56:17.571044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.906 [2024-06-10 09:56:17.571057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.906 [2024-06-10 09:56:17.571187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.906 [2024-06-10 09:56:17.571221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:23.906 [2024-06-10 09:56:17.571261] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.906 [2024-06-10 09:56:17.571288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.906 [2024-06-10 09:56:17.571335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.906 [2024-06-10 09:56:17.571365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:23.906 [2024-06-10 09:56:17.571397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.906 [2024-06-10 09:56:17.571412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.166 [2024-06-10 09:56:17.671692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.166 [2024-06-10 09:56:17.671768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:24.166 [2024-06-10 09:56:17.671787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.166 [2024-06-10 09:56:17.671799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.166 [2024-06-10 09:56:17.711659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.166 [2024-06-10 09:56:17.711728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:24.166 [2024-06-10 09:56:17.711746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.166 [2024-06-10 09:56:17.711758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.166 [2024-06-10 09:56:17.711864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.166 [2024-06-10 09:56:17.711884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:24.166 [2024-06-10 09:56:17.711897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.166 [2024-06-10 09:56:17.711908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.166 [2024-06-10 09:56:17.711944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.166 [2024-06-10 09:56:17.711957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:24.166 [2024-06-10 09:56:17.711981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.166 [2024-06-10 09:56:17.711992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.166 [2024-06-10 09:56:17.712161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.166 [2024-06-10 09:56:17.712185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:24.166 [2024-06-10 09:56:17.712199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.166 [2024-06-10 09:56:17.712217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.166 [2024-06-10 09:56:17.712291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.166 [2024-06-10 09:56:17.712312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:24.166 [2024-06-10 09:56:17.712325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.166 [2024-06-10 09:56:17.712345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.166 [2024-06-10 09:56:17.712393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.166 [2024-06-10 09:56:17.712409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:18:24.166 [2024-06-10 09:56:17.712423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.166 [2024-06-10 09:56:17.712441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.166 [2024-06-10 09:56:17.712524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.166 [2024-06-10 09:56:17.712564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:24.166 [2024-06-10 09:56:17.712585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.166 [2024-06-10 09:56:17.712601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.166 [2024-06-10 09:56:17.712787] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.855 ms, result 0 00:18:25.101 00:18:25.101 00:18:25.360 09:56:18 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:25.929 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:25.929 09:56:19 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:25.929 09:56:19 -- ftl/trim.sh@109 -- # fio_kill 00:18:25.929 09:56:19 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:25.929 09:56:19 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:25.929 09:56:19 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:25.929 09:56:19 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:25.929 09:56:19 -- ftl/trim.sh@20 -- # killprocess 73967 00:18:25.929 09:56:19 -- common/autotest_common.sh@926 -- # '[' -z 73967 ']' 00:18:25.929 09:56:19 -- common/autotest_common.sh@930 -- # kill -0 73967 00:18:25.929 Process with pid 73967 is not found 00:18:25.929 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (73967) - No such process 00:18:25.929 09:56:19 -- common/autotest_common.sh@953 -- # echo 'Process with pid 73967 is not found' 00:18:25.929 00:18:25.929 real 1m10.357s 00:18:25.929 user 1m34.650s 00:18:25.929 sys 0m6.358s 00:18:25.929 09:56:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.929 09:56:19 -- common/autotest_common.sh@10 -- # set +x 00:18:25.929 ************************************ 00:18:25.929 END TEST ftl_trim 00:18:25.929 ************************************ 00:18:25.929 09:56:19 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:18:25.929 09:56:19 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:25.929 09:56:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:25.929 09:56:19 -- common/autotest_common.sh@10 -- # set +x 00:18:25.929 ************************************ 00:18:25.929 START TEST ftl_restore 00:18:25.929 ************************************ 00:18:25.929 09:56:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:18:25.929 * Looking for test storage... 
00:18:25.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.929 09:56:19 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:25.929 09:56:19 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:25.929 09:56:19 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.929 09:56:19 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.929 09:56:19 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:25.929 09:56:19 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:25.929 09:56:19 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.929 09:56:19 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:25.929 09:56:19 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:25.929 09:56:19 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.929 09:56:19 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.929 09:56:19 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:25.929 09:56:19 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:25.929 09:56:19 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.929 09:56:19 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.929 09:56:19 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:25.929 09:56:19 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:25.929 09:56:19 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.929 09:56:19 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.929 09:56:19 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:25.929 09:56:19 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:25.929 09:56:19 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.929 09:56:19 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.929 09:56:19 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.929 09:56:19 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.929 09:56:19 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:25.929 09:56:19 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:25.929 09:56:19 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.929 09:56:19 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.929 09:56:19 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.929 09:56:19 -- ftl/restore.sh@13 -- # mktemp -d 00:18:25.929 09:56:19 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.btDOKhEvlt 00:18:25.929 09:56:19 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:25.929 09:56:19 -- ftl/restore.sh@16 -- # case $opt in 00:18:25.929 09:56:19 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:18:25.929 09:56:19 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:25.929 09:56:19 -- ftl/restore.sh@23 -- # shift 2 00:18:25.929 09:56:19 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:18:25.929 09:56:19 -- ftl/restore.sh@25 -- # timeout=240 00:18:25.929 09:56:19 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 
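[Editor's note] The xtrace above (ftl/restore.sh@13 through @36) is the restore test's setup: a scratch directory from mktemp -d, a getopts ':u:c:f' loop in which -c selects the NV-cache PCIe address, the remaining positional argument taken as the base device, and a cleanup trap armed before any state is created. Below is a minimal bash sketch of that same pattern, not the script itself; restore_kill is assumed to be defined by the real restore.sh (it is only referenced in the trap here), and the -u/-f branches are placeholders since this run never exercises them.

    #!/usr/bin/env bash
    # Sketch of restore.sh's option handling (simplified).
    # restore_kill is assumed to be provided by the real script.
    mount_dir=$(mktemp -d)        # e.g. /tmp/tmp.btDOKhEvlt in the run above

    while getopts ':u:c:f' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;    # -c 0000:00:06.0 in this run
        u) opt_u=$OPTARG ;;       # declared by ':u:c:f' but not exercised here
        f) opt_f=1 ;;             # flag option (no argument); not exercised here
        *) echo "usage: $0 [-u arg] [-c cache_bdf] [-f] base_bdf" >&2; exit 1 ;;
      esac
    done
    shift $((OPTIND - 1))         # expands to 'shift 2' for '-c <addr>', as traced

    device=$1                     # 0000:00:07.0 above
    timeout=240

    # Arm cleanup before creating any state, so an interrupted run
    # still tears down whatever the test managed to set up.
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT

Arming the trap before the SPDK target is launched (next line of the trace) is the design point worth noting: any failure from here on, including a SIGINT during the long bdev_ftl_create, still runs the cleanup path.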
00:18:25.929 09:56:19 -- ftl/restore.sh@39 -- # svcpid=74235 00:18:25.929 09:56:19 -- ftl/restore.sh@41 -- # waitforlisten 74235 00:18:25.929 09:56:19 -- common/autotest_common.sh@819 -- # '[' -z 74235 ']' 00:18:25.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.929 09:56:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.929 09:56:19 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.929 09:56:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:25.929 09:56:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.929 09:56:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:25.929 09:56:19 -- common/autotest_common.sh@10 -- # set +x 00:18:26.216 [2024-06-10 09:56:19.806010] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:26.216 [2024-06-10 09:56:19.806234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74235 ] 00:18:26.216 [2024-06-10 09:56:19.977036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.474 [2024-06-10 09:56:20.163887] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:26.474 [2024-06-10 09:56:20.164196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.849 09:56:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:27.849 09:56:21 -- common/autotest_common.sh@852 -- # return 0 00:18:27.849 09:56:21 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:18:27.849 09:56:21 -- ftl/common.sh@54 -- # local name=nvme0 00:18:27.849 09:56:21 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:18:27.849 09:56:21 -- ftl/common.sh@56 -- # local size=103424 00:18:27.849 09:56:21 -- ftl/common.sh@59 -- # local base_bdev 00:18:27.849 09:56:21 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:18:28.107 09:56:21 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:28.107 09:56:21 -- ftl/common.sh@62 -- # local base_size 00:18:28.107 09:56:21 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:28.107 09:56:21 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:18:28.107 09:56:21 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:28.107 09:56:21 -- common/autotest_common.sh@1359 -- # local bs 00:18:28.107 09:56:21 -- common/autotest_common.sh@1360 -- # local nb 00:18:28.107 09:56:21 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:28.366 09:56:22 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:28.366 { 00:18:28.366 "name": "nvme0n1", 00:18:28.366 "aliases": [ 00:18:28.367 "14994fc2-7120-4874-bcda-7d227b4c3781" 00:18:28.367 ], 00:18:28.367 "product_name": "NVMe disk", 00:18:28.367 "block_size": 4096, 00:18:28.367 "num_blocks": 1310720, 00:18:28.367 "uuid": "14994fc2-7120-4874-bcda-7d227b4c3781", 00:18:28.367 "assigned_rate_limits": { 00:18:28.367 "rw_ios_per_sec": 0, 00:18:28.367 "rw_mbytes_per_sec": 0, 00:18:28.367 "r_mbytes_per_sec": 0, 00:18:28.367 "w_mbytes_per_sec": 0 00:18:28.367 }, 00:18:28.367 "claimed": true, 00:18:28.367 
"claim_type": "read_many_write_one", 00:18:28.367 "zoned": false, 00:18:28.367 "supported_io_types": { 00:18:28.367 "read": true, 00:18:28.367 "write": true, 00:18:28.367 "unmap": true, 00:18:28.367 "write_zeroes": true, 00:18:28.367 "flush": true, 00:18:28.367 "reset": true, 00:18:28.367 "compare": true, 00:18:28.367 "compare_and_write": false, 00:18:28.367 "abort": true, 00:18:28.367 "nvme_admin": true, 00:18:28.367 "nvme_io": true 00:18:28.367 }, 00:18:28.367 "driver_specific": { 00:18:28.367 "nvme": [ 00:18:28.367 { 00:18:28.367 "pci_address": "0000:00:07.0", 00:18:28.367 "trid": { 00:18:28.367 "trtype": "PCIe", 00:18:28.367 "traddr": "0000:00:07.0" 00:18:28.367 }, 00:18:28.367 "ctrlr_data": { 00:18:28.367 "cntlid": 0, 00:18:28.367 "vendor_id": "0x1b36", 00:18:28.367 "model_number": "QEMU NVMe Ctrl", 00:18:28.367 "serial_number": "12341", 00:18:28.367 "firmware_revision": "8.0.0", 00:18:28.367 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:28.367 "oacs": { 00:18:28.367 "security": 0, 00:18:28.367 "format": 1, 00:18:28.367 "firmware": 0, 00:18:28.367 "ns_manage": 1 00:18:28.367 }, 00:18:28.367 "multi_ctrlr": false, 00:18:28.367 "ana_reporting": false 00:18:28.367 }, 00:18:28.367 "vs": { 00:18:28.367 "nvme_version": "1.4" 00:18:28.367 }, 00:18:28.367 "ns_data": { 00:18:28.367 "id": 1, 00:18:28.367 "can_share": false 00:18:28.367 } 00:18:28.367 } 00:18:28.367 ], 00:18:28.367 "mp_policy": "active_passive" 00:18:28.367 } 00:18:28.367 } 00:18:28.367 ]' 00:18:28.367 09:56:22 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:28.625 09:56:22 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:28.625 09:56:22 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:28.625 09:56:22 -- common/autotest_common.sh@1363 -- # nb=1310720 00:18:28.625 09:56:22 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:18:28.625 09:56:22 -- common/autotest_common.sh@1367 -- # echo 5120 00:18:28.625 09:56:22 -- ftl/common.sh@63 -- # base_size=5120 00:18:28.625 09:56:22 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:28.625 09:56:22 -- ftl/common.sh@67 -- # clear_lvols 00:18:28.625 09:56:22 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:28.625 09:56:22 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:28.883 09:56:22 -- ftl/common.sh@28 -- # stores=d85a82cd-931e-43be-a5d5-485e66c464d4 00:18:28.883 09:56:22 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:28.883 09:56:22 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d85a82cd-931e-43be-a5d5-485e66c464d4 00:18:29.141 09:56:22 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:29.399 09:56:22 -- ftl/common.sh@68 -- # lvs=521553f9-1a7b-4372-b1dc-bea0a1aafae3 00:18:29.399 09:56:22 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 521553f9-1a7b-4372-b1dc-bea0a1aafae3 00:18:29.658 09:56:23 -- ftl/restore.sh@43 -- # split_bdev=d798bb41-44af-40f9-a4f9-df335bcecef4 00:18:29.658 09:56:23 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:18:29.658 09:56:23 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 d798bb41-44af-40f9-a4f9-df335bcecef4 00:18:29.658 09:56:23 -- ftl/common.sh@35 -- # local name=nvc0 00:18:29.658 09:56:23 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:18:29.658 09:56:23 -- ftl/common.sh@37 -- # local base_bdev=d798bb41-44af-40f9-a4f9-df335bcecef4 00:18:29.658 09:56:23 -- 
ftl/common.sh@38 -- # local cache_size= 00:18:29.658 09:56:23 -- ftl/common.sh@41 -- # get_bdev_size d798bb41-44af-40f9-a4f9-df335bcecef4 00:18:29.658 09:56:23 -- common/autotest_common.sh@1357 -- # local bdev_name=d798bb41-44af-40f9-a4f9-df335bcecef4 00:18:29.658 09:56:23 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:29.658 09:56:23 -- common/autotest_common.sh@1359 -- # local bs 00:18:29.658 09:56:23 -- common/autotest_common.sh@1360 -- # local nb 00:18:29.658 09:56:23 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d798bb41-44af-40f9-a4f9-df335bcecef4 00:18:29.917 09:56:23 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:29.917 { 00:18:29.917 "name": "d798bb41-44af-40f9-a4f9-df335bcecef4", 00:18:29.917 "aliases": [ 00:18:29.917 "lvs/nvme0n1p0" 00:18:29.917 ], 00:18:29.917 "product_name": "Logical Volume", 00:18:29.917 "block_size": 4096, 00:18:29.917 "num_blocks": 26476544, 00:18:29.917 "uuid": "d798bb41-44af-40f9-a4f9-df335bcecef4", 00:18:29.917 "assigned_rate_limits": { 00:18:29.917 "rw_ios_per_sec": 0, 00:18:29.917 "rw_mbytes_per_sec": 0, 00:18:29.917 "r_mbytes_per_sec": 0, 00:18:29.917 "w_mbytes_per_sec": 0 00:18:29.917 }, 00:18:29.917 "claimed": false, 00:18:29.917 "zoned": false, 00:18:29.917 "supported_io_types": { 00:18:29.917 "read": true, 00:18:29.917 "write": true, 00:18:29.917 "unmap": true, 00:18:29.917 "write_zeroes": true, 00:18:29.917 "flush": false, 00:18:29.917 "reset": true, 00:18:29.917 "compare": false, 00:18:29.917 "compare_and_write": false, 00:18:29.917 "abort": false, 00:18:29.917 "nvme_admin": false, 00:18:29.917 "nvme_io": false 00:18:29.917 }, 00:18:29.917 "driver_specific": { 00:18:29.917 "lvol": { 00:18:29.917 "lvol_store_uuid": "521553f9-1a7b-4372-b1dc-bea0a1aafae3", 00:18:29.917 "base_bdev": "nvme0n1", 00:18:29.917 "thin_provision": true, 00:18:29.917 "snapshot": false, 00:18:29.917 "clone": false, 00:18:29.917 "esnap_clone": false 00:18:29.917 } 00:18:29.917 } 00:18:29.917 } 00:18:29.917 ]' 00:18:29.917 09:56:23 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:29.917 09:56:23 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:29.917 09:56:23 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:29.917 09:56:23 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:29.917 09:56:23 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:29.917 09:56:23 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:29.917 09:56:23 -- ftl/common.sh@41 -- # local base_size=5171 00:18:29.917 09:56:23 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:29.917 09:56:23 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:18:30.484 09:56:24 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:30.484 09:56:24 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:30.484 09:56:24 -- ftl/common.sh@48 -- # get_bdev_size d798bb41-44af-40f9-a4f9-df335bcecef4 00:18:30.484 09:56:24 -- common/autotest_common.sh@1357 -- # local bdev_name=d798bb41-44af-40f9-a4f9-df335bcecef4 00:18:30.484 09:56:24 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:30.484 09:56:24 -- common/autotest_common.sh@1359 -- # local bs 00:18:30.484 09:56:24 -- common/autotest_common.sh@1360 -- # local nb 00:18:30.484 09:56:24 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d798bb41-44af-40f9-a4f9-df335bcecef4 00:18:30.484 09:56:24 -- common/autotest_common.sh@1361 -- # 
bdev_info='[ 00:18:30.484 { 00:18:30.484 "name": "d798bb41-44af-40f9-a4f9-df335bcecef4", 00:18:30.484 "aliases": [ 00:18:30.484 "lvs/nvme0n1p0" 00:18:30.484 ], 00:18:30.484 "product_name": "Logical Volume", 00:18:30.484 "block_size": 4096, 00:18:30.484 "num_blocks": 26476544, 00:18:30.484 "uuid": "d798bb41-44af-40f9-a4f9-df335bcecef4", 00:18:30.484 "assigned_rate_limits": { 00:18:30.484 "rw_ios_per_sec": 0, 00:18:30.484 "rw_mbytes_per_sec": 0, 00:18:30.484 "r_mbytes_per_sec": 0, 00:18:30.484 "w_mbytes_per_sec": 0 00:18:30.484 }, 00:18:30.484 "claimed": false, 00:18:30.484 "zoned": false, 00:18:30.484 "supported_io_types": { 00:18:30.484 "read": true, 00:18:30.484 "write": true, 00:18:30.484 "unmap": true, 00:18:30.484 "write_zeroes": true, 00:18:30.484 "flush": false, 00:18:30.484 "reset": true, 00:18:30.484 "compare": false, 00:18:30.484 "compare_and_write": false, 00:18:30.484 "abort": false, 00:18:30.484 "nvme_admin": false, 00:18:30.484 "nvme_io": false 00:18:30.484 }, 00:18:30.484 "driver_specific": { 00:18:30.484 "lvol": { 00:18:30.484 "lvol_store_uuid": "521553f9-1a7b-4372-b1dc-bea0a1aafae3", 00:18:30.484 "base_bdev": "nvme0n1", 00:18:30.484 "thin_provision": true, 00:18:30.484 "snapshot": false, 00:18:30.484 "clone": false, 00:18:30.484 "esnap_clone": false 00:18:30.484 } 00:18:30.484 } 00:18:30.484 } 00:18:30.484 ]' 00:18:30.484 09:56:24 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:30.743 09:56:24 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:30.743 09:56:24 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:30.743 09:56:24 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:30.743 09:56:24 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:30.743 09:56:24 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:30.743 09:56:24 -- ftl/common.sh@48 -- # cache_size=5171 00:18:30.743 09:56:24 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:31.002 09:56:24 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:31.002 09:56:24 -- ftl/restore.sh@48 -- # get_bdev_size d798bb41-44af-40f9-a4f9-df335bcecef4 00:18:31.002 09:56:24 -- common/autotest_common.sh@1357 -- # local bdev_name=d798bb41-44af-40f9-a4f9-df335bcecef4 00:18:31.002 09:56:24 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:31.002 09:56:24 -- common/autotest_common.sh@1359 -- # local bs 00:18:31.002 09:56:24 -- common/autotest_common.sh@1360 -- # local nb 00:18:31.002 09:56:24 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d798bb41-44af-40f9-a4f9-df335bcecef4 00:18:31.261 09:56:24 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:31.261 { 00:18:31.261 "name": "d798bb41-44af-40f9-a4f9-df335bcecef4", 00:18:31.261 "aliases": [ 00:18:31.261 "lvs/nvme0n1p0" 00:18:31.261 ], 00:18:31.261 "product_name": "Logical Volume", 00:18:31.261 "block_size": 4096, 00:18:31.261 "num_blocks": 26476544, 00:18:31.261 "uuid": "d798bb41-44af-40f9-a4f9-df335bcecef4", 00:18:31.261 "assigned_rate_limits": { 00:18:31.261 "rw_ios_per_sec": 0, 00:18:31.261 "rw_mbytes_per_sec": 0, 00:18:31.261 "r_mbytes_per_sec": 0, 00:18:31.261 "w_mbytes_per_sec": 0 00:18:31.261 }, 00:18:31.261 "claimed": false, 00:18:31.261 "zoned": false, 00:18:31.261 "supported_io_types": { 00:18:31.261 "read": true, 00:18:31.261 "write": true, 00:18:31.261 "unmap": true, 00:18:31.261 "write_zeroes": true, 00:18:31.261 "flush": false, 00:18:31.261 "reset": true, 00:18:31.261 "compare": false, 
00:18:31.261 "compare_and_write": false, 00:18:31.261 "abort": false, 00:18:31.261 "nvme_admin": false, 00:18:31.261 "nvme_io": false 00:18:31.261 }, 00:18:31.261 "driver_specific": { 00:18:31.261 "lvol": { 00:18:31.261 "lvol_store_uuid": "521553f9-1a7b-4372-b1dc-bea0a1aafae3", 00:18:31.261 "base_bdev": "nvme0n1", 00:18:31.261 "thin_provision": true, 00:18:31.261 "snapshot": false, 00:18:31.261 "clone": false, 00:18:31.261 "esnap_clone": false 00:18:31.261 } 00:18:31.261 } 00:18:31.261 } 00:18:31.261 ]' 00:18:31.261 09:56:24 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:31.261 09:56:24 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:31.261 09:56:24 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:31.261 09:56:24 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:31.261 09:56:24 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:31.261 09:56:24 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:31.261 09:56:24 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:31.261 09:56:24 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d798bb41-44af-40f9-a4f9-df335bcecef4 --l2p_dram_limit 10' 00:18:31.261 09:56:24 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:31.261 09:56:24 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:18:31.261 09:56:24 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:31.261 09:56:24 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:31.261 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:31.261 09:56:24 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d798bb41-44af-40f9-a4f9-df335bcecef4 --l2p_dram_limit 10 -c nvc0n1p0 00:18:31.521 [2024-06-10 09:56:25.231866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.521 [2024-06-10 09:56:25.231928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:31.521 [2024-06-10 09:56:25.231954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:31.521 [2024-06-10 09:56:25.231969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.521 [2024-06-10 09:56:25.232055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.521 [2024-06-10 09:56:25.232075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:31.521 [2024-06-10 09:56:25.232093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:31.521 [2024-06-10 09:56:25.232125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.521 [2024-06-10 09:56:25.232165] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:31.521 [2024-06-10 09:56:25.233171] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:31.521 [2024-06-10 09:56:25.233209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.521 [2024-06-10 09:56:25.233225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:31.521 [2024-06-10 09:56:25.233241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:18:31.521 [2024-06-10 09:56:25.233254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.521 [2024-06-10 09:56:25.233400] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 
6ad7ca36-a162-4dc9-ba9a-04abdb426008 00:18:31.521 [2024-06-10 09:56:25.234546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.521 [2024-06-10 09:56:25.234590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:31.521 [2024-06-10 09:56:25.234608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:31.521 [2024-06-10 09:56:25.234623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.521 [2024-06-10 09:56:25.239615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.521 [2024-06-10 09:56:25.239675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:31.522 [2024-06-10 09:56:25.239694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.930 ms 00:18:31.522 [2024-06-10 09:56:25.239710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.522 [2024-06-10 09:56:25.239853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.522 [2024-06-10 09:56:25.239879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:31.522 [2024-06-10 09:56:25.239895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:18:31.522 [2024-06-10 09:56:25.239916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.522 [2024-06-10 09:56:25.240007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.522 [2024-06-10 09:56:25.240030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:31.522 [2024-06-10 09:56:25.240045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:31.522 [2024-06-10 09:56:25.240064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.522 [2024-06-10 09:56:25.240121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:31.522 [2024-06-10 09:56:25.244847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.522 [2024-06-10 09:56:25.244893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:31.522 [2024-06-10 09:56:25.244914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.754 ms 00:18:31.522 [2024-06-10 09:56:25.244928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.522 [2024-06-10 09:56:25.244987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.522 [2024-06-10 09:56:25.245004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:31.522 [2024-06-10 09:56:25.245021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:31.522 [2024-06-10 09:56:25.245034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.522 [2024-06-10 09:56:25.245122] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:31.522 [2024-06-10 09:56:25.245282] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:31.522 [2024-06-10 09:56:25.245311] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:31.522 [2024-06-10 09:56:25.245328] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:31.522 [2024-06-10 09:56:25.245348] ftl_layout.c: 
676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:31.522 [2024-06-10 09:56:25.245363] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:31.522 [2024-06-10 09:56:25.245379] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:31.522 [2024-06-10 09:56:25.245391] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:31.522 [2024-06-10 09:56:25.245406] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:31.522 [2024-06-10 09:56:25.245423] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:31.522 [2024-06-10 09:56:25.245440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.522 [2024-06-10 09:56:25.245453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:31.522 [2024-06-10 09:56:25.245485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:18:31.522 [2024-06-10 09:56:25.245504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.522 [2024-06-10 09:56:25.245599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.522 [2024-06-10 09:56:25.245617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:31.522 [2024-06-10 09:56:25.245644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:31.522 [2024-06-10 09:56:25.245656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.522 [2024-06-10 09:56:25.245784] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:31.522 [2024-06-10 09:56:25.245802] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:31.522 [2024-06-10 09:56:25.245827] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:31.522 [2024-06-10 09:56:25.245851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.522 [2024-06-10 09:56:25.245876] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:31.522 [2024-06-10 09:56:25.245891] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:31.522 [2024-06-10 09:56:25.245905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:31.522 [2024-06-10 09:56:25.245919] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:31.522 [2024-06-10 09:56:25.245933] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:31.522 [2024-06-10 09:56:25.245945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:31.522 [2024-06-10 09:56:25.245961] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:31.522 [2024-06-10 09:56:25.245974] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:31.522 [2024-06-10 09:56:25.245988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:31.522 [2024-06-10 09:56:25.246001] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:31.522 [2024-06-10 09:56:25.246015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:31.522 [2024-06-10 09:56:25.246026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.522 [2024-06-10 09:56:25.246043] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:31.522 [2024-06-10 09:56:25.246055] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:31.522 [2024-06-10 09:56:25.246069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.522 [2024-06-10 09:56:25.246095] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:31.522 [2024-06-10 09:56:25.246109] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:31.522 [2024-06-10 09:56:25.246121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:31.522 [2024-06-10 09:56:25.246134] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:31.522 [2024-06-10 09:56:25.246174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:31.522 [2024-06-10 09:56:25.246196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:31.522 [2024-06-10 09:56:25.246225] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:31.522 [2024-06-10 09:56:25.246240] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:31.522 [2024-06-10 09:56:25.246252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:31.522 [2024-06-10 09:56:25.246277] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:31.522 [2024-06-10 09:56:25.246289] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:31.522 [2024-06-10 09:56:25.246303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:31.522 [2024-06-10 09:56:25.246314] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:31.522 [2024-06-10 09:56:25.246330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:31.522 [2024-06-10 09:56:25.246342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:31.522 [2024-06-10 09:56:25.246357] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:31.522 [2024-06-10 09:56:25.246369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:31.522 [2024-06-10 09:56:25.246384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:31.522 [2024-06-10 09:56:25.246397] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:31.522 [2024-06-10 09:56:25.246411] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:31.522 [2024-06-10 09:56:25.246425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:31.522 [2024-06-10 09:56:25.246439] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:31.522 [2024-06-10 09:56:25.246452] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:31.522 [2024-06-10 09:56:25.246468] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:31.522 [2024-06-10 09:56:25.246488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.522 [2024-06-10 09:56:25.246515] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:31.522 [2024-06-10 09:56:25.246531] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:31.522 [2024-06-10 09:56:25.246545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:31.522 [2024-06-10 09:56:25.246557] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:31.522 [2024-06-10 09:56:25.246574] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:31.522 
[2024-06-10 09:56:25.246586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:31.522 [2024-06-10 09:56:25.246617] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:31.522 [2024-06-10 09:56:25.246632] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:31.522 [2024-06-10 09:56:25.246650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:31.522 [2024-06-10 09:56:25.246663] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:31.522 [2024-06-10 09:56:25.246678] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:31.522 [2024-06-10 09:56:25.246691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:31.522 [2024-06-10 09:56:25.246705] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:31.522 [2024-06-10 09:56:25.246737] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:31.522 [2024-06-10 09:56:25.246763] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:31.522 [2024-06-10 09:56:25.246786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:31.522 [2024-06-10 09:56:25.246804] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:31.522 [2024-06-10 09:56:25.246817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:31.522 [2024-06-10 09:56:25.246834] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:31.522 [2024-06-10 09:56:25.246847] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:31.523 [2024-06-10 09:56:25.246864] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:31.523 [2024-06-10 09:56:25.246877] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:31.523 [2024-06-10 09:56:25.246893] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:31.523 [2024-06-10 09:56:25.246907] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:31.523 [2024-06-10 09:56:25.246932] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:31.523 [2024-06-10 09:56:25.246945] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 
00:18:31.523 [2024-06-10 09:56:25.246960] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:31.523 [2024-06-10 09:56:25.246976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.523 [2024-06-10 09:56:25.246992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:31.523 [2024-06-10 09:56:25.247010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:18:31.523 [2024-06-10 09:56:25.247036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.523 [2024-06-10 09:56:25.266065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.523 [2024-06-10 09:56:25.266171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:31.523 [2024-06-10 09:56:25.266195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.946 ms 00:18:31.523 [2024-06-10 09:56:25.266211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.523 [2024-06-10 09:56:25.266343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.523 [2024-06-10 09:56:25.266366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:31.523 [2024-06-10 09:56:25.266382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:31.523 [2024-06-10 09:56:25.266397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.782 [2024-06-10 09:56:25.307232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.782 [2024-06-10 09:56:25.307313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:31.782 [2024-06-10 09:56:25.307336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.751 ms 00:18:31.782 [2024-06-10 09:56:25.307351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.782 [2024-06-10 09:56:25.307451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.782 [2024-06-10 09:56:25.307478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:31.782 [2024-06-10 09:56:25.307504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:31.782 [2024-06-10 09:56:25.307519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.782 [2024-06-10 09:56:25.307965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.782 [2024-06-10 09:56:25.307995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:31.782 [2024-06-10 09:56:25.308009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:18:31.782 [2024-06-10 09:56:25.308028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.782 [2024-06-10 09:56:25.308228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.782 [2024-06-10 09:56:25.308257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:31.782 [2024-06-10 09:56:25.308272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:18:31.782 [2024-06-10 09:56:25.308288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.782 [2024-06-10 09:56:25.327032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.782 [2024-06-10 09:56:25.327152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize reloc 00:18:31.782 [2024-06-10 09:56:25.327175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.712 ms 00:18:31.782 [2024-06-10 09:56:25.327190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.782 [2024-06-10 09:56:25.341334] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:31.782 [2024-06-10 09:56:25.344353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.782 [2024-06-10 09:56:25.344424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:31.782 [2024-06-10 09:56:25.344447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.001 ms 00:18:31.782 [2024-06-10 09:56:25.344461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.782 [2024-06-10 09:56:25.407512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.782 [2024-06-10 09:56:25.407607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:31.782 [2024-06-10 09:56:25.407634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.972 ms 00:18:31.782 [2024-06-10 09:56:25.407649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.782 [2024-06-10 09:56:25.407792] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:31.782 [2024-06-10 09:56:25.407815] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:18:34.313 [2024-06-10 09:56:27.559119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.313 [2024-06-10 09:56:27.559235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:34.313 [2024-06-10 09:56:27.559263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2151.319 ms 00:18:34.313 [2024-06-10 09:56:27.559279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.313 [2024-06-10 09:56:27.559548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.313 [2024-06-10 09:56:27.559570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:34.313 [2024-06-10 09:56:27.559588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:18:34.313 [2024-06-10 09:56:27.559601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.313 [2024-06-10 09:56:27.593275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.313 [2024-06-10 09:56:27.593351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:34.313 [2024-06-10 09:56:27.593378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.548 ms 00:18:34.313 [2024-06-10 09:56:27.593393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.313 [2024-06-10 09:56:27.625841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.313 [2024-06-10 09:56:27.625912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:34.313 [2024-06-10 09:56:27.625943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.337 ms 00:18:34.313 [2024-06-10 09:56:27.625957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.313 [2024-06-10 09:56:27.626441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.313 [2024-06-10 
09:56:27.626471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:34.313 [2024-06-10 09:56:27.626489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:18:34.313 [2024-06-10 09:56:27.626503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.313 [2024-06-10 09:56:27.709199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.313 [2024-06-10 09:56:27.709283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:34.313 [2024-06-10 09:56:27.709309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.576 ms 00:18:34.313 [2024-06-10 09:56:27.709323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.313 [2024-06-10 09:56:27.743423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.313 [2024-06-10 09:56:27.743509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:34.313 [2024-06-10 09:56:27.743535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.980 ms 00:18:34.313 [2024-06-10 09:56:27.743553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.313 [2024-06-10 09:56:27.745648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.313 [2024-06-10 09:56:27.745690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:34.313 [2024-06-10 09:56:27.745714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.985 ms 00:18:34.313 [2024-06-10 09:56:27.745728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.313 [2024-06-10 09:56:27.779962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.313 [2024-06-10 09:56:27.780035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:34.313 [2024-06-10 09:56:27.780060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.106 ms 00:18:34.313 [2024-06-10 09:56:27.780074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.313 [2024-06-10 09:56:27.780200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.313 [2024-06-10 09:56:27.780222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:34.313 [2024-06-10 09:56:27.780240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:34.313 [2024-06-10 09:56:27.780253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.313 [2024-06-10 09:56:27.780411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.313 [2024-06-10 09:56:27.780432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:34.313 [2024-06-10 09:56:27.780452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:34.313 [2024-06-10 09:56:27.780466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.313 [2024-06-10 09:56:27.781724] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2549.295 ms, result 0 00:18:34.313 { 00:18:34.313 "name": "ftl0", 00:18:34.313 "uuid": "6ad7ca36-a162-4dc9-ba9a-04abdb426008" 00:18:34.313 } 00:18:34.314 09:56:27 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:34.314 09:56:27 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:34.573 09:56:28 -- 
ftl/restore.sh@63 -- # echo ']}' 00:18:34.573 09:56:28 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:34.833 [2024-06-10 09:56:28.345128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.833 [2024-06-10 09:56:28.345209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:34.833 [2024-06-10 09:56:28.345238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:34.833 [2024-06-10 09:56:28.345254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.833 [2024-06-10 09:56:28.345294] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:34.833 [2024-06-10 09:56:28.348837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.833 [2024-06-10 09:56:28.348885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:34.833 [2024-06-10 09:56:28.348904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.512 ms 00:18:34.833 [2024-06-10 09:56:28.348916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.833 [2024-06-10 09:56:28.349317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.833 [2024-06-10 09:56:28.349339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:34.833 [2024-06-10 09:56:28.349359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:18:34.833 [2024-06-10 09:56:28.349372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.833 [2024-06-10 09:56:28.352841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.833 [2024-06-10 09:56:28.352871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:34.833 [2024-06-10 09:56:28.352889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.425 ms 00:18:34.833 [2024-06-10 09:56:28.352902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.833 [2024-06-10 09:56:28.359786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.833 [2024-06-10 09:56:28.359839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:34.833 [2024-06-10 09:56:28.359859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.834 ms 00:18:34.833 [2024-06-10 09:56:28.359876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.833 [2024-06-10 09:56:28.393191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.833 [2024-06-10 09:56:28.393285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:34.833 [2024-06-10 09:56:28.393312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.158 ms 00:18:34.833 [2024-06-10 09:56:28.393326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.833 [2024-06-10 09:56:28.413573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.833 [2024-06-10 09:56:28.413649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:34.833 [2024-06-10 09:56:28.413675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.144 ms 00:18:34.833 [2024-06-10 09:56:28.413689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.833 [2024-06-10 09:56:28.413959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:18:34.833 [2024-06-10 09:56:28.413982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:34.833 [2024-06-10 09:56:28.414006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:18:34.833 [2024-06-10 09:56:28.414019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.833 [2024-06-10 09:56:28.447697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.833 [2024-06-10 09:56:28.447772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:34.833 [2024-06-10 09:56:28.447797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.630 ms 00:18:34.833 [2024-06-10 09:56:28.447811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.833 [2024-06-10 09:56:28.481750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.833 [2024-06-10 09:56:28.481830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:34.833 [2024-06-10 09:56:28.481856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.802 ms 00:18:34.833 [2024-06-10 09:56:28.481870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.833 [2024-06-10 09:56:28.515205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.834 [2024-06-10 09:56:28.515274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:34.834 [2024-06-10 09:56:28.515298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.178 ms 00:18:34.834 [2024-06-10 09:56:28.515312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.834 [2024-06-10 09:56:28.548107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.834 [2024-06-10 09:56:28.548226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:34.834 [2024-06-10 09:56:28.548251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.588 ms 00:18:34.834 [2024-06-10 09:56:28.548265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.834 [2024-06-10 09:56:28.548366] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:34.834 [2024-06-10 09:56:28.548394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 
state: free 00:18:34.834 [2024-06-10 09:56:28.548534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 
0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.548985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:34.834 [2024-06-10 09:56:28.549562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549650] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:34.835 [2024-06-10 09:56:28.549908] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:34.835 [2024-06-10 09:56:28.549929] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ad7ca36-a162-4dc9-ba9a-04abdb426008 00:18:34.835 [2024-06-10 09:56:28.549942] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:34.835 [2024-06-10 09:56:28.549957] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:34.835 [2024-06-10 09:56:28.549969] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:34.835 [2024-06-10 09:56:28.549984] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:34.835 [2024-06-10 09:56:28.549997] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:34.835 [2024-06-10 09:56:28.550012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:34.835 [2024-06-10 09:56:28.550025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:34.835 [2024-06-10 09:56:28.550038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:34.835 [2024-06-10 
09:56:28.550049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:34.835 [2024-06-10 09:56:28.550067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.835 [2024-06-10 09:56:28.550080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:34.835 [2024-06-10 09:56:28.550096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.707 ms 00:18:34.835 [2024-06-10 09:56:28.550130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.835 [2024-06-10 09:56:28.567780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.835 [2024-06-10 09:56:28.567847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:34.835 [2024-06-10 09:56:28.567871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.542 ms 00:18:34.835 [2024-06-10 09:56:28.567884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.835 [2024-06-10 09:56:28.568177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.835 [2024-06-10 09:56:28.568201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:34.835 [2024-06-10 09:56:28.568219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:18:34.835 [2024-06-10 09:56:28.568236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.094 [2024-06-10 09:56:28.628589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.094 [2024-06-10 09:56:28.628679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.094 [2024-06-10 09:56:28.628704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.094 [2024-06-10 09:56:28.628718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.094 [2024-06-10 09:56:28.628817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.094 [2024-06-10 09:56:28.628834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.094 [2024-06-10 09:56:28.628851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.094 [2024-06-10 09:56:28.628869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.094 [2024-06-10 09:56:28.629001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.094 [2024-06-10 09:56:28.629022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.094 [2024-06-10 09:56:28.629040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.094 [2024-06-10 09:56:28.629054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.095 [2024-06-10 09:56:28.629085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.095 [2024-06-10 09:56:28.629100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.095 [2024-06-10 09:56:28.629116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.095 [2024-06-10 09:56:28.629152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.095 [2024-06-10 09:56:28.734023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.095 [2024-06-10 09:56:28.734086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.095 [2024-06-10 09:56:28.734118] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.095 [2024-06-10 09:56:28.734135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.095 [2024-06-10 09:56:28.774418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.095 [2024-06-10 09:56:28.774508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.095 [2024-06-10 09:56:28.774531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.095 [2024-06-10 09:56:28.774549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.095 [2024-06-10 09:56:28.774670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.095 [2024-06-10 09:56:28.774705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.095 [2024-06-10 09:56:28.774738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.095 [2024-06-10 09:56:28.774751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.095 [2024-06-10 09:56:28.774818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.095 [2024-06-10 09:56:28.774836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.095 [2024-06-10 09:56:28.774852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.095 [2024-06-10 09:56:28.774865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.095 [2024-06-10 09:56:28.775000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.095 [2024-06-10 09:56:28.775021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.095 [2024-06-10 09:56:28.775038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.095 [2024-06-10 09:56:28.775051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.095 [2024-06-10 09:56:28.775147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.095 [2024-06-10 09:56:28.775182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:35.095 [2024-06-10 09:56:28.775224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.095 [2024-06-10 09:56:28.775257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.095 [2024-06-10 09:56:28.775312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.095 [2024-06-10 09:56:28.775332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.095 [2024-06-10 09:56:28.775348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.095 [2024-06-10 09:56:28.775361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.095 [2024-06-10 09:56:28.775434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.095 [2024-06-10 09:56:28.775454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.095 [2024-06-10 09:56:28.775470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.095 [2024-06-10 09:56:28.775482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.095 [2024-06-10 09:56:28.775650] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 430.497 ms, result 0 00:18:35.095 true 00:18:35.095 09:56:28 -- 
ftl/restore.sh@66 -- # killprocess 74235 00:18:35.095 09:56:28 -- common/autotest_common.sh@926 -- # '[' -z 74235 ']' 00:18:35.095 09:56:28 -- common/autotest_common.sh@930 -- # kill -0 74235 00:18:35.095 09:56:28 -- common/autotest_common.sh@931 -- # uname 00:18:35.095 09:56:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:35.095 09:56:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74235 00:18:35.095 09:56:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:35.095 killing process with pid 74235 00:18:35.095 09:56:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:35.095 09:56:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74235' 00:18:35.095 09:56:28 -- common/autotest_common.sh@945 -- # kill 74235 00:18:35.095 09:56:28 -- common/autotest_common.sh@950 -- # wait 74235 00:18:39.284 09:56:32 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:44.552 262144+0 records in 00:18:44.552 262144+0 records out 00:18:44.552 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.09231 s, 211 MB/s 00:18:44.552 09:56:37 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:46.453 09:56:39 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:46.453 [2024-06-10 09:56:39.988926] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:46.453 [2024-06-10 09:56:39.989080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74490 ] 00:18:46.453 [2024-06-10 09:56:40.149973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.711 [2024-06-10 09:56:40.351517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.970 [2024-06-10 09:56:40.656646] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:46.970 [2024-06-10 09:56:40.656747] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:47.228 [2024-06-10 09:56:40.811770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.228 [2024-06-10 09:56:40.811835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:47.228 [2024-06-10 09:56:40.811856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:47.228 [2024-06-10 09:56:40.811868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.228 [2024-06-10 09:56:40.811936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.228 [2024-06-10 09:56:40.811955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.228 [2024-06-10 09:56:40.811968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:47.228 [2024-06-10 09:56:40.811979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.228 [2024-06-10 09:56:40.812010] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:47.229 [2024-06-10 09:56:40.812965] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
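[editor's note] The `killprocess 74235` xtrace above records each command the helper runs against the SPDK reactor: an empty-pid guard, a `kill -0` liveness probe, a `ps` comm lookup (here: reactor_0), a sudo-wrapper check, then `kill` and `wait`. A minimal sketch reconstructed from those traced commands follows; anything beyond what the trace shows (the sudo-branch body, error handling) is an assumption, not the actual autotest_common.sh source.

```bash
# Sketch of killprocess as reconstructed from the xtrace lines above.
# Only the traced commands come from the log; the rest is assumed.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                   # traced: '[' -z 74235 ']'
    kill -0 "$pid" || return 1                  # traced: kill -0 74235 (is it alive?)
    local process_name=
    if [[ $(uname) == Linux ]]; then            # traced: uname; '[' Linux = Linux ']'
        process_name=$(ps --no-headers -o comm= "$pid")   # here: reactor_0
    fi
    if [[ $process_name == sudo ]]; then
        :   # traced guard only; this branch is not exercised in this run
    fi
    echo "killing process with pid $pid"        # traced echo
    kill "$pid"                                 # traced: kill 74235
    wait "$pid"                                 # traced: wait 74235 (reap the reactor)
}
```

`wait` can only reap children of the current shell, which is why the helper runs in the same shell that launched the target app; once the pid is reaped, the trace proceeds to the `dd`/`spdk_dd` restore steps seen above.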
00:18:47.229 [2024-06-10 09:56:40.813002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.229 [2024-06-10 09:56:40.813016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.229 [2024-06-10 09:56:40.813029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:18:47.229 [2024-06-10 09:56:40.813040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.229 [2024-06-10 09:56:40.814294] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:47.229 [2024-06-10 09:56:40.831066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.229 [2024-06-10 09:56:40.831136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:47.229 [2024-06-10 09:56:40.831163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.773 ms 00:18:47.229 [2024-06-10 09:56:40.831175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.229 [2024-06-10 09:56:40.831257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.229 [2024-06-10 09:56:40.831278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:47.229 [2024-06-10 09:56:40.831292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:47.229 [2024-06-10 09:56:40.831303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.229 [2024-06-10 09:56:40.836005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.229 [2024-06-10 09:56:40.836049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.229 [2024-06-10 09:56:40.836065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.599 ms 00:18:47.229 [2024-06-10 09:56:40.836077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.229 [2024-06-10 09:56:40.836203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.229 [2024-06-10 09:56:40.836226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.229 [2024-06-10 09:56:40.836239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:47.229 [2024-06-10 09:56:40.836250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.229 [2024-06-10 09:56:40.836309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.229 [2024-06-10 09:56:40.836331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:47.229 [2024-06-10 09:56:40.836343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:47.229 [2024-06-10 09:56:40.836354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.229 [2024-06-10 09:56:40.836392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:47.229 [2024-06-10 09:56:40.840704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.229 [2024-06-10 09:56:40.840774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.229 [2024-06-10 09:56:40.840790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.325 ms 00:18:47.229 [2024-06-10 09:56:40.840802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.229 [2024-06-10 09:56:40.840852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.229 
[2024-06-10 09:56:40.840869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:47.229 [2024-06-10 09:56:40.840881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:47.229 [2024-06-10 09:56:40.840892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.229 [2024-06-10 09:56:40.840938] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:47.229 [2024-06-10 09:56:40.840973] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:47.229 [2024-06-10 09:56:40.841014] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:47.229 [2024-06-10 09:56:40.841034] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:47.229 [2024-06-10 09:56:40.841130] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:47.229 [2024-06-10 09:56:40.841149] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:47.229 [2024-06-10 09:56:40.841163] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:47.229 [2024-06-10 09:56:40.841177] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:47.229 [2024-06-10 09:56:40.841191] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:47.229 [2024-06-10 09:56:40.841207] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:47.229 [2024-06-10 09:56:40.841218] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:47.229 [2024-06-10 09:56:40.841229] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:47.229 [2024-06-10 09:56:40.841240] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:47.229 [2024-06-10 09:56:40.841251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.229 [2024-06-10 09:56:40.841262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:47.229 [2024-06-10 09:56:40.841274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:18:47.229 [2024-06-10 09:56:40.841285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.229 [2024-06-10 09:56:40.841356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.229 [2024-06-10 09:56:40.841371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:47.229 [2024-06-10 09:56:40.841386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:47.229 [2024-06-10 09:56:40.841398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.229 [2024-06-10 09:56:40.841509] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:47.229 [2024-06-10 09:56:40.841527] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:47.229 [2024-06-10 09:56:40.841539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:47.229 [2024-06-10 09:56:40.841551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.229 [2024-06-10 09:56:40.841563] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:47.229 [2024-06-10 09:56:40.841573] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:47.229 [2024-06-10 09:56:40.841584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:47.229 [2024-06-10 09:56:40.841594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:47.229 [2024-06-10 09:56:40.841604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:47.229 [2024-06-10 09:56:40.841615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:47.229 [2024-06-10 09:56:40.841626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:47.229 [2024-06-10 09:56:40.841636] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:47.229 [2024-06-10 09:56:40.841646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:47.229 [2024-06-10 09:56:40.841657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:47.229 [2024-06-10 09:56:40.841667] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:47.229 [2024-06-10 09:56:40.841677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.229 [2024-06-10 09:56:40.841688] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:47.229 [2024-06-10 09:56:40.841698] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:47.229 [2024-06-10 09:56:40.841708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.229 [2024-06-10 09:56:40.841718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:47.229 [2024-06-10 09:56:40.841728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:47.229 [2024-06-10 09:56:40.841753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:47.229 [2024-06-10 09:56:40.841764] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:47.229 [2024-06-10 09:56:40.841774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:47.229 [2024-06-10 09:56:40.841784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:47.229 [2024-06-10 09:56:40.841795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:47.229 [2024-06-10 09:56:40.841805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:47.229 [2024-06-10 09:56:40.841815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:47.229 [2024-06-10 09:56:40.841825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:47.229 [2024-06-10 09:56:40.841835] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:47.229 [2024-06-10 09:56:40.841845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:47.229 [2024-06-10 09:56:40.841855] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:47.229 [2024-06-10 09:56:40.841865] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:47.229 [2024-06-10 09:56:40.841875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:47.229 [2024-06-10 09:56:40.841885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:47.229 [2024-06-10 09:56:40.841895] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:47.229 [2024-06-10 
09:56:40.841905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:47.229 [2024-06-10 09:56:40.841915] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:47.229 [2024-06-10 09:56:40.841925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:47.229 [2024-06-10 09:56:40.841935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:47.229 [2024-06-10 09:56:40.841945] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:47.229 [2024-06-10 09:56:40.841957] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:47.229 [2024-06-10 09:56:40.841968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:47.229 [2024-06-10 09:56:40.841983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.229 [2024-06-10 09:56:40.841994] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:47.229 [2024-06-10 09:56:40.842005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:47.229 [2024-06-10 09:56:40.842015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:47.229 [2024-06-10 09:56:40.842026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:47.229 [2024-06-10 09:56:40.842036] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:47.230 [2024-06-10 09:56:40.842046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:47.230 [2024-06-10 09:56:40.842057] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:47.230 [2024-06-10 09:56:40.842071] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:47.230 [2024-06-10 09:56:40.842084] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:47.230 [2024-06-10 09:56:40.842095] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:47.230 [2024-06-10 09:56:40.842122] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:47.230 [2024-06-10 09:56:40.842136] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:47.230 [2024-06-10 09:56:40.842147] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:47.230 [2024-06-10 09:56:40.842159] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:47.230 [2024-06-10 09:56:40.842170] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:47.230 [2024-06-10 09:56:40.842181] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:47.230 [2024-06-10 09:56:40.842191] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:47.230 [2024-06-10 09:56:40.842202] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:47.230 [2024-06-10 09:56:40.842213] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:47.230 [2024-06-10 09:56:40.842224] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:47.230 [2024-06-10 09:56:40.842236] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:47.230 [2024-06-10 09:56:40.842247] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:47.230 [2024-06-10 09:56:40.842259] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:47.230 [2024-06-10 09:56:40.842272] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:47.230 [2024-06-10 09:56:40.842283] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:47.230 [2024-06-10 09:56:40.842296] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:47.230 [2024-06-10 09:56:40.842307] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:47.230 [2024-06-10 09:56:40.842320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.230 [2024-06-10 09:56:40.842331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:47.230 [2024-06-10 09:56:40.842344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:18:47.230 [2024-06-10 09:56:40.842354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.230 [2024-06-10 09:56:40.860954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.230 [2024-06-10 09:56:40.861006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:47.230 [2024-06-10 09:56:40.861024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.545 ms 00:18:47.230 [2024-06-10 09:56:40.861036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.230 [2024-06-10 09:56:40.861163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.230 [2024-06-10 09:56:40.861187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:47.230 [2024-06-10 09:56:40.861200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:47.230 [2024-06-10 09:56:40.861211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.230 [2024-06-10 09:56:40.917201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.230 [2024-06-10 09:56:40.917260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:47.230 [2024-06-10 09:56:40.917281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.915 ms 00:18:47.230 [2024-06-10 09:56:40.917298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.230 [2024-06-10 09:56:40.917374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
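
(Aside on reading the superblock layout dumps above: blk_offs and blk_sz are given in FTL blocks, and they line up with the MiB figures in the NV cache layout dump once a 4 KiB block is assumed; that block size is an inference from the numbers themselves, e.g. the l2p region's blk_sz:0x5000 is 20480 blocks x 4096 B = 80.00 MiB. A minimal Python sketch of the conversion follows; the region names in the comments are matched to the dump by offset and size, not taken from any SPDK API.)

    BLOCK_SIZE = 4096  # bytes per FTL block; inferred from 0x5000 blocks == 80.00 MiB above

    def mib(blocks: int) -> float:
        # Convert a block count from the dump into MiB.
        return blocks * BLOCK_SIZE / (1024 * 1024)

    # (type, blk_offs, blk_sz) rows copied from the "SB metadata layout - nvc" dump above
    nvc_regions = [
        (0x0, 0x0000, 0x0020),    # sb       -> offset 0.00 MiB,  blocks 0.12 MiB
        (0x2, 0x0020, 0x5000),    # l2p      -> offset 0.12 MiB,  blocks 80.00 MiB
        (0x3, 0x5020, 0x0080),    # band_md  -> offset 80.12 MiB, blocks 0.50 MiB
        (0xa, 0x5120, 0x0400),    # p2l0     -> offset 81.12 MiB, blocks 4.00 MiB
        (0xe, 0x6120, 0x0040),    # trim_md  -> offset 97.12 MiB, blocks 0.25 MiB
        (0x8, 0x61e0, 0x100000),  # data_nvc -> offset 97.88 MiB, blocks 4096.00 MiB
    ]
    for rtype, offs, size in nvc_regions:
        print(f"type 0x{rtype:x}: offset {mib(offs):.2f} MiB, blocks {mib(size):.2f} MiB")
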
00:18:47.230 [2024-06-10 09:56:40.917393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:47.230 [2024-06-10 09:56:40.917406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:47.230 [2024-06-10 09:56:40.917417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.230 [2024-06-10 09:56:40.917812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.230 [2024-06-10 09:56:40.917842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:47.230 [2024-06-10 09:56:40.917857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:18:47.230 [2024-06-10 09:56:40.917868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.230 [2024-06-10 09:56:40.918025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.230 [2024-06-10 09:56:40.918045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:47.230 [2024-06-10 09:56:40.918057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:18:47.230 [2024-06-10 09:56:40.918068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.230 [2024-06-10 09:56:40.935305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.230 [2024-06-10 09:56:40.935350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:47.230 [2024-06-10 09:56:40.935368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.210 ms 00:18:47.230 [2024-06-10 09:56:40.935381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.230 [2024-06-10 09:56:40.952079] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:47.230 [2024-06-10 09:56:40.952138] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:47.230 [2024-06-10 09:56:40.952158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.230 [2024-06-10 09:56:40.952170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:47.230 [2024-06-10 09:56:40.952183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.624 ms 00:18:47.230 [2024-06-10 09:56:40.952195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.230 [2024-06-10 09:56:40.982475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.230 [2024-06-10 09:56:40.982532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:47.230 [2024-06-10 09:56:40.982566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.231 ms 00:18:47.230 [2024-06-10 09:56:40.982577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.488 [2024-06-10 09:56:40.998608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.488 [2024-06-10 09:56:40.998652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:47.488 [2024-06-10 09:56:40.998669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.981 ms 00:18:47.488 [2024-06-10 09:56:40.998680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.488 [2024-06-10 09:56:41.014472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.488 [2024-06-10 09:56:41.014514] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:47.488 [2024-06-10 09:56:41.014531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.747 ms 00:18:47.488 [2024-06-10 09:56:41.014542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.488 [2024-06-10 09:56:41.015014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.489 [2024-06-10 09:56:41.015052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:47.489 [2024-06-10 09:56:41.015068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:18:47.489 [2024-06-10 09:56:41.015079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.489 [2024-06-10 09:56:41.091679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.489 [2024-06-10 09:56:41.091782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:47.489 [2024-06-10 09:56:41.091805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.573 ms 00:18:47.489 [2024-06-10 09:56:41.091817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.489 [2024-06-10 09:56:41.105294] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:47.489 [2024-06-10 09:56:41.108402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.489 [2024-06-10 09:56:41.108456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:47.489 [2024-06-10 09:56:41.108505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.488 ms 00:18:47.489 [2024-06-10 09:56:41.108517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.489 [2024-06-10 09:56:41.108644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.489 [2024-06-10 09:56:41.108663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:47.489 [2024-06-10 09:56:41.108681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:47.489 [2024-06-10 09:56:41.108707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.489 [2024-06-10 09:56:41.108810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.489 [2024-06-10 09:56:41.108842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:47.489 [2024-06-10 09:56:41.108856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:47.489 [2024-06-10 09:56:41.108867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.489 [2024-06-10 09:56:41.110865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.489 [2024-06-10 09:56:41.110906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:47.489 [2024-06-10 09:56:41.110921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.966 ms 00:18:47.489 [2024-06-10 09:56:41.110937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.489 [2024-06-10 09:56:41.110974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.489 [2024-06-10 09:56:41.110991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:47.489 [2024-06-10 09:56:41.111004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:47.489 [2024-06-10 09:56:41.111015] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:47.489 [2024-06-10 09:56:41.111066] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:47.489 [2024-06-10 09:56:41.111083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.489 [2024-06-10 09:56:41.111095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:47.489 [2024-06-10 09:56:41.111155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:47.489 [2024-06-10 09:56:41.111168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.489 [2024-06-10 09:56:41.142552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.489 [2024-06-10 09:56:41.142649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:47.489 [2024-06-10 09:56:41.142686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.349 ms 00:18:47.489 [2024-06-10 09:56:41.142699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.489 [2024-06-10 09:56:41.142850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.489 [2024-06-10 09:56:41.142870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:47.489 [2024-06-10 09:56:41.142884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:47.489 [2024-06-10 09:56:41.142910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.489 [2024-06-10 09:56:41.144380] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 332.060 ms, result 0 00:19:27.290  Copying: 1024/1024 [MB] (average 25 MBps)[2024-06-10 09:57:20.988834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.290 [2024-06-10 09:57:20.988899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:27.290 [2024-06-10 09:57:20.988921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:27.290 [2024-06-10 09:57:20.988933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.290 [2024-06-10 09:57:20.988963] mngt/ftl_mngt_ioch.c:
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:27.290 [2024-06-10 09:57:20.992390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.290 [2024-06-10 09:57:20.992428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:27.290 [2024-06-10 09:57:20.992460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.404 ms 00:19:27.290 [2024-06-10 09:57:20.992471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.290 [2024-06-10 09:57:20.994194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.290 [2024-06-10 09:57:20.994267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:27.290 [2024-06-10 09:57:20.994284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:19:27.290 [2024-06-10 09:57:20.994296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.290 [2024-06-10 09:57:21.010881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.290 [2024-06-10 09:57:21.010925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:27.290 [2024-06-10 09:57:21.010959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.563 ms 00:19:27.290 [2024-06-10 09:57:21.010970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.290 [2024-06-10 09:57:21.017902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.290 [2024-06-10 09:57:21.017946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:27.290 [2024-06-10 09:57:21.017961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.890 ms 00:19:27.290 [2024-06-10 09:57:21.017973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.290 [2024-06-10 09:57:21.049038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.290 [2024-06-10 09:57:21.049078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:27.290 [2024-06-10 09:57:21.049111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.987 ms 00:19:27.290 [2024-06-10 09:57:21.049138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.550 [2024-06-10 09:57:21.067167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.550 [2024-06-10 09:57:21.067221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:27.550 [2024-06-10 09:57:21.067254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.986 ms 00:19:27.550 [2024-06-10 09:57:21.067265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.550 [2024-06-10 09:57:21.067445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.550 [2024-06-10 09:57:21.067467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:27.550 [2024-06-10 09:57:21.067488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:19:27.550 [2024-06-10 09:57:21.067499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.550 [2024-06-10 09:57:21.097859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.550 [2024-06-10 09:57:21.097900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:27.550 [2024-06-10 09:57:21.097932] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 30.340 ms 00:19:27.550 [2024-06-10 09:57:21.097942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.550 [2024-06-10 09:57:21.128267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.550 [2024-06-10 09:57:21.128307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:27.550 [2024-06-10 09:57:21.128324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.284 ms 00:19:27.550 [2024-06-10 09:57:21.128335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.550 [2024-06-10 09:57:21.160830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.550 [2024-06-10 09:57:21.160874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:27.550 [2024-06-10 09:57:21.160892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.454 ms 00:19:27.550 [2024-06-10 09:57:21.160904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.550 [2024-06-10 09:57:21.192149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.550 [2024-06-10 09:57:21.192212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:27.550 [2024-06-10 09:57:21.192245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.139 ms 00:19:27.550 [2024-06-10 09:57:21.192256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.550 [2024-06-10 09:57:21.192297] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:27.550 [2024-06-10 09:57:21.192321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192482] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192778] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.192993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.193004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.193015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.193026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.193038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.193050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 
09:57:21.193062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.193073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:27.550 [2024-06-10 09:57:21.193084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:19:27.551 [2024-06-10 09:57:21.193368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:27.551 [2024-06-10 09:57:21.193508] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:27.551 [2024-06-10 09:57:21.193519] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ad7ca36-a162-4dc9-ba9a-04abdb426008 00:19:27.551 [2024-06-10 09:57:21.193530] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:27.551 [2024-06-10 09:57:21.193548] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:27.551 [2024-06-10 09:57:21.193558] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:27.551 [2024-06-10 09:57:21.193569] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:27.551 [2024-06-10 09:57:21.193579] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:27.551 [2024-06-10 09:57:21.193590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:27.551 [2024-06-10 09:57:21.193600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:27.551 [2024-06-10 09:57:21.193610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:27.551 [2024-06-10 09:57:21.193620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:27.551 [2024-06-10 09:57:21.193631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.551 [2024-06-10 09:57:21.193642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:27.551 [2024-06-10 09:57:21.193654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:19:27.551 [2024-06-10 09:57:21.193665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.551 [2024-06-10 09:57:21.209904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.551 [2024-06-10 
09:57:21.209943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:27.551 [2024-06-10 09:57:21.209959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.175 ms 00:19:27.551 [2024-06-10 09:57:21.209970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.551 [2024-06-10 09:57:21.210240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.551 [2024-06-10 09:57:21.210259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:27.551 [2024-06-10 09:57:21.210272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:19:27.551 [2024-06-10 09:57:21.210283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.551 [2024-06-10 09:57:21.254010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.551 [2024-06-10 09:57:21.254057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:27.551 [2024-06-10 09:57:21.254089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.551 [2024-06-10 09:57:21.254099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.551 [2024-06-10 09:57:21.254168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.551 [2024-06-10 09:57:21.254184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.551 [2024-06-10 09:57:21.254196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.551 [2024-06-10 09:57:21.254206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.551 [2024-06-10 09:57:21.254315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.551 [2024-06-10 09:57:21.254334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.551 [2024-06-10 09:57:21.254347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.551 [2024-06-10 09:57:21.254358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.551 [2024-06-10 09:57:21.254380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.551 [2024-06-10 09:57:21.254394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.551 [2024-06-10 09:57:21.254406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.551 [2024-06-10 09:57:21.254416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.810 [2024-06-10 09:57:21.348336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.810 [2024-06-10 09:57:21.348398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.810 [2024-06-10 09:57:21.348432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.810 [2024-06-10 09:57:21.348443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.810 [2024-06-10 09:57:21.385309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.810 [2024-06-10 09:57:21.385366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.810 [2024-06-10 09:57:21.385398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.810 [2024-06-10 09:57:21.385409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.810 [2024-06-10 09:57:21.385492] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.810 [2024-06-10 09:57:21.385516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.810 [2024-06-10 09:57:21.385527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.810 [2024-06-10 09:57:21.385538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.810 [2024-06-10 09:57:21.385588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.810 [2024-06-10 09:57:21.385604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.810 [2024-06-10 09:57:21.385632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.810 [2024-06-10 09:57:21.385642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.810 [2024-06-10 09:57:21.385775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.810 [2024-06-10 09:57:21.385799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.810 [2024-06-10 09:57:21.385811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.810 [2024-06-10 09:57:21.385823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.810 [2024-06-10 09:57:21.385870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.810 [2024-06-10 09:57:21.385899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:27.810 [2024-06-10 09:57:21.385912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.810 [2024-06-10 09:57:21.385923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.810 [2024-06-10 09:57:21.385966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.810 [2024-06-10 09:57:21.385981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.810 [2024-06-10 09:57:21.385998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.810 [2024-06-10 09:57:21.386009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.810 [2024-06-10 09:57:21.386058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.810 [2024-06-10 09:57:21.386073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.810 [2024-06-10 09:57:21.386085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.810 [2024-06-10 09:57:21.386095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.810 [2024-06-10 09:57:21.386247] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 397.383 ms, result 0 00:19:29.183 00:19:29.183 00:19:29.183 09:57:22 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:29.183 [2024-06-10 09:57:22.710884] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
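
(Aside: restore.sh line 74 above reads 262144 blocks from the ftl0 bdev back into testfile via spdk_dd. Assuming the same 4 KiB FTL block size the layout dumps imply, --count=262144 works out to exactly the 1024 MiB the earlier copy pass reported, and at the reported average of 25 MBps such a pass takes roughly 40 seconds, matching the 09:56:41 to 09:57:20 timestamps above. A quick sanity check of that arithmetic, under those assumptions:)

    count = 262144      # --count from the spdk_dd invocation above
    block_size = 4096   # bytes; assumed 4 KiB FTL block size (see the layout dumps)
    total_mib = count * block_size // (1024 * 1024)
    print(total_mib)        # -> 1024, matching "Copying: 1024/1024 [MB]"
    print(total_mib / 25)   # -> ~41 s at the reported average of 25 MBps
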
00:19:29.183 [2024-06-10 09:57:22.711087] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74920 ] 00:19:29.183 [2024-06-10 09:57:22.878353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.441 [2024-06-10 09:57:23.054109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.699 [2024-06-10 09:57:23.361597] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:29.699 [2024-06-10 09:57:23.361704] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:29.959 [2024-06-10 09:57:23.515202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.959 [2024-06-10 09:57:23.515287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:29.959 [2024-06-10 09:57:23.515324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:29.959 [2024-06-10 09:57:23.515337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.959 [2024-06-10 09:57:23.515419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.959 [2024-06-10 09:57:23.515439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.959 [2024-06-10 09:57:23.515452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:29.959 [2024-06-10 09:57:23.515463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.959 [2024-06-10 09:57:23.515496] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:29.959 [2024-06-10 09:57:23.516447] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:29.959 [2024-06-10 09:57:23.516499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.959 [2024-06-10 09:57:23.516513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.959 [2024-06-10 09:57:23.516525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:19:29.959 [2024-06-10 09:57:23.516537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.959 [2024-06-10 09:57:23.517789] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:29.959 [2024-06-10 09:57:23.534007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.959 [2024-06-10 09:57:23.534068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:29.959 [2024-06-10 09:57:23.534092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.219 ms 00:19:29.959 [2024-06-10 09:57:23.534122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.959 [2024-06-10 09:57:23.534197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.959 [2024-06-10 09:57:23.534216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:29.959 [2024-06-10 09:57:23.534228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:29.959 [2024-06-10 09:57:23.534239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.959 [2024-06-10 09:57:23.538919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.959 [2024-06-10 
09:57:23.538977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.960 [2024-06-10 09:57:23.539009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.586 ms 00:19:29.960 [2024-06-10 09:57:23.539021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.960 [2024-06-10 09:57:23.539151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.960 [2024-06-10 09:57:23.539174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.960 [2024-06-10 09:57:23.539187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:29.960 [2024-06-10 09:57:23.539197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.960 [2024-06-10 09:57:23.539255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.960 [2024-06-10 09:57:23.539277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:29.960 [2024-06-10 09:57:23.539289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:29.960 [2024-06-10 09:57:23.539300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.960 [2024-06-10 09:57:23.539339] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:29.960 [2024-06-10 09:57:23.543534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.960 [2024-06-10 09:57:23.543574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.960 [2024-06-10 09:57:23.543590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.207 ms 00:19:29.960 [2024-06-10 09:57:23.543601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.960 [2024-06-10 09:57:23.543644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.960 [2024-06-10 09:57:23.543659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:29.960 [2024-06-10 09:57:23.543672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:29.960 [2024-06-10 09:57:23.543682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.960 [2024-06-10 09:57:23.543727] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:29.960 [2024-06-10 09:57:23.543760] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:29.960 [2024-06-10 09:57:23.543800] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:29.960 [2024-06-10 09:57:23.543820] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:29.960 [2024-06-10 09:57:23.543902] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:29.960 [2024-06-10 09:57:23.543920] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:29.960 [2024-06-10 09:57:23.543935] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:29.960 [2024-06-10 09:57:23.543950] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:29.960 [2024-06-10 09:57:23.543971] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:29.960 [2024-06-10 09:57:23.543987] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:29.960 [2024-06-10 09:57:23.543998] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:29.960 [2024-06-10 09:57:23.544008] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:29.960 [2024-06-10 09:57:23.544018] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:29.960 [2024-06-10 09:57:23.544030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.960 [2024-06-10 09:57:23.544046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:29.960 [2024-06-10 09:57:23.544066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:19:29.960 [2024-06-10 09:57:23.544085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.960 [2024-06-10 09:57:23.544201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.960 [2024-06-10 09:57:23.544229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:29.960 [2024-06-10 09:57:23.544247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:29.960 [2024-06-10 09:57:23.544259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.960 [2024-06-10 09:57:23.544371] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:29.960 [2024-06-10 09:57:23.544389] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:29.960 [2024-06-10 09:57:23.544402] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:29.960 [2024-06-10 09:57:23.544413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.960 [2024-06-10 09:57:23.544425] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:29.960 [2024-06-10 09:57:23.544436] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:29.960 [2024-06-10 09:57:23.544446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:29.960 [2024-06-10 09:57:23.544457] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:29.960 [2024-06-10 09:57:23.544468] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:29.960 [2024-06-10 09:57:23.544478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:29.960 [2024-06-10 09:57:23.544488] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:29.960 [2024-06-10 09:57:23.544499] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:29.960 [2024-06-10 09:57:23.544509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:29.960 [2024-06-10 09:57:23.544519] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:29.960 [2024-06-10 09:57:23.544529] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:29.960 [2024-06-10 09:57:23.544539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.960 [2024-06-10 09:57:23.544549] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:29.960 [2024-06-10 09:57:23.544559] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:29.960 [2024-06-10 09:57:23.544569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:29.960 [2024-06-10 09:57:23.544579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:29.960 [2024-06-10 09:57:23.544590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:29.960 [2024-06-10 09:57:23.544614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:29.960 [2024-06-10 09:57:23.544624] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:29.960 [2024-06-10 09:57:23.544635] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:29.960 [2024-06-10 09:57:23.544645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:29.960 [2024-06-10 09:57:23.544655] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:29.960 [2024-06-10 09:57:23.544665] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:29.960 [2024-06-10 09:57:23.544675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:29.960 [2024-06-10 09:57:23.544685] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:29.960 [2024-06-10 09:57:23.544695] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:29.960 [2024-06-10 09:57:23.544705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:29.960 [2024-06-10 09:57:23.544715] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:29.960 [2024-06-10 09:57:23.544728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:29.960 [2024-06-10 09:57:23.544747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:29.960 [2024-06-10 09:57:23.544765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:29.960 [2024-06-10 09:57:23.544776] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:29.960 [2024-06-10 09:57:23.544786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:29.960 [2024-06-10 09:57:23.544798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:29.960 [2024-06-10 09:57:23.544809] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:29.960 [2024-06-10 09:57:23.544820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:29.960 [2024-06-10 09:57:23.544830] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:29.960 [2024-06-10 09:57:23.544841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:29.960 [2024-06-10 09:57:23.544852] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:29.960 [2024-06-10 09:57:23.544868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.960 [2024-06-10 09:57:23.544879] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:29.960 [2024-06-10 09:57:23.544890] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:29.960 [2024-06-10 09:57:23.544900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:29.960 [2024-06-10 09:57:23.544911] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:29.960 [2024-06-10 09:57:23.544920] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:29.960 [2024-06-10 09:57:23.544931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:29.960 [2024-06-10 09:57:23.544943] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:29.960 [2024-06-10 09:57:23.544956] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:29.960 [2024-06-10 09:57:23.544969] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:29.960 [2024-06-10 09:57:23.544980] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:29.960 [2024-06-10 09:57:23.544991] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:29.960 [2024-06-10 09:57:23.545002] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:29.960 [2024-06-10 09:57:23.545013] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:29.960 [2024-06-10 09:57:23.545024] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:29.960 [2024-06-10 09:57:23.545035] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:29.961 [2024-06-10 09:57:23.545047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:29.961 [2024-06-10 09:57:23.545058] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:29.961 [2024-06-10 09:57:23.545069] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:29.961 [2024-06-10 09:57:23.545080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:29.961 [2024-06-10 09:57:23.545091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:29.961 [2024-06-10 09:57:23.545117] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:29.961 [2024-06-10 09:57:23.545131] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:29.961 [2024-06-10 09:57:23.545144] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:29.961 [2024-06-10 09:57:23.545156] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:29.961 [2024-06-10 09:57:23.545167] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:29.961 [2024-06-10 09:57:23.545180] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:29.961 [2024-06-10 09:57:23.545192] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:19:29.961 [2024-06-10 09:57:23.545204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.961 [2024-06-10 09:57:23.545216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:29.961 [2024-06-10 09:57:23.545228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:19:29.961 [2024-06-10 09:57:23.545239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.961 [2024-06-10 09:57:23.563538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.961 [2024-06-10 09:57:23.563587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.961 [2024-06-10 09:57:23.563605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.241 ms 00:19:29.961 [2024-06-10 09:57:23.563617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.961 [2024-06-10 09:57:23.563721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.961 [2024-06-10 09:57:23.563743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:29.961 [2024-06-10 09:57:23.563756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:29.961 [2024-06-10 09:57:23.563767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.961 [2024-06-10 09:57:23.614230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.961 [2024-06-10 09:57:23.614286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:29.961 [2024-06-10 09:57:23.614321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.378 ms 00:19:29.961 [2024-06-10 09:57:23.614338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.961 [2024-06-10 09:57:23.614423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.961 [2024-06-10 09:57:23.614440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:29.961 [2024-06-10 09:57:23.614454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:29.961 [2024-06-10 09:57:23.614465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.961 [2024-06-10 09:57:23.614857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.961 [2024-06-10 09:57:23.614878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:29.961 [2024-06-10 09:57:23.614892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:29.961 [2024-06-10 09:57:23.614903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.961 [2024-06-10 09:57:23.615057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.961 [2024-06-10 09:57:23.615076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:29.961 [2024-06-10 09:57:23.615089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:19:29.961 [2024-06-10 09:57:23.615100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.961 [2024-06-10 09:57:23.632507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.961 [2024-06-10 09:57:23.632581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:29.961 [2024-06-10 09:57:23.632604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.377 ms 00:19:29.961 [2024-06-10 
09:57:23.632617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.961 [2024-06-10 09:57:23.649266] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:29.961 [2024-06-10 09:57:23.649312] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:29.961 [2024-06-10 09:57:23.649346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.961 [2024-06-10 09:57:23.649359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:29.961 [2024-06-10 09:57:23.649373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.539 ms 00:19:29.961 [2024-06-10 09:57:23.649385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.961 [2024-06-10 09:57:23.679129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.961 [2024-06-10 09:57:23.679178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:29.961 [2024-06-10 09:57:23.679212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.686 ms 00:19:29.961 [2024-06-10 09:57:23.679224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.961 [2024-06-10 09:57:23.695780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.961 [2024-06-10 09:57:23.695859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:29.961 [2024-06-10 09:57:23.695879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.515 ms 00:19:29.961 [2024-06-10 09:57:23.695891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.961 [2024-06-10 09:57:23.711413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.961 [2024-06-10 09:57:23.711477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:29.961 [2024-06-10 09:57:23.711495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.426 ms 00:19:29.961 [2024-06-10 09:57:23.711506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.961 [2024-06-10 09:57:23.712008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.961 [2024-06-10 09:57:23.712033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:29.961 [2024-06-10 09:57:23.712046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:19:29.961 [2024-06-10 09:57:23.712058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.220 [2024-06-10 09:57:23.787293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.220 [2024-06-10 09:57:23.787380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:30.220 [2024-06-10 09:57:23.787440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.209 ms 00:19:30.220 [2024-06-10 09:57:23.787453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.220 [2024-06-10 09:57:23.800555] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:30.220 [2024-06-10 09:57:23.803566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.220 [2024-06-10 09:57:23.803609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:30.220 [2024-06-10 09:57:23.803630] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.022 ms 00:19:30.220 [2024-06-10 09:57:23.803642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.220 [2024-06-10 09:57:23.803764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.220 [2024-06-10 09:57:23.803789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:30.220 [2024-06-10 09:57:23.803803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:30.220 [2024-06-10 09:57:23.803814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.220 [2024-06-10 09:57:23.803912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.220 [2024-06-10 09:57:23.803931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:30.220 [2024-06-10 09:57:23.803942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:30.220 [2024-06-10 09:57:23.803953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.220 [2024-06-10 09:57:23.805899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.220 [2024-06-10 09:57:23.805936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:30.220 [2024-06-10 09:57:23.805971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.917 ms 00:19:30.220 [2024-06-10 09:57:23.805982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.220 [2024-06-10 09:57:23.806019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.220 [2024-06-10 09:57:23.806034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:30.220 [2024-06-10 09:57:23.806046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:30.220 [2024-06-10 09:57:23.806064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.220 [2024-06-10 09:57:23.806108] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:30.220 [2024-06-10 09:57:23.806161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.220 [2024-06-10 09:57:23.806191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:30.220 [2024-06-10 09:57:23.806202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:30.220 [2024-06-10 09:57:23.806218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.220 [2024-06-10 09:57:23.837260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.220 [2024-06-10 09:57:23.837320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:30.220 [2024-06-10 09:57:23.837355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.015 ms 00:19:30.220 [2024-06-10 09:57:23.837366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.220 [2024-06-10 09:57:23.837445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.220 [2024-06-10 09:57:23.837471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:30.220 [2024-06-10 09:57:23.837483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:30.220 [2024-06-10 09:57:23.837495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.220 [2024-06-10 09:57:23.838672] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 322.979 ms, result 0 00:20:08.829  Copying: 1024/1024 [MB] (average 26 MBps)[2024-06-10 09:58:02.505446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.829 [2024-06-10 09:58:02.506082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:08.829 [2024-06-10 09:58:02.506301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:08.829 [2024-06-10 09:58:02.506394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.829 [2024-06-10 09:58:02.506569] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:08.829 [2024-06-10 09:58:02.510859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.829 [2024-06-10 09:58:02.511047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:08.829 [2024-06-10 09:58:02.511209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.081 ms 00:20:08.829 [2024-06-10 09:58:02.511247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.829 [2024-06-10 09:58:02.511604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.829 [2024-06-10 09:58:02.511632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:08.829 [2024-06-10 09:58:02.511648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:20:08.829 [2024-06-10 09:58:02.511661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.829 [2024-06-10 09:58:02.516216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.829 [2024-06-10 09:58:02.516252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:08.829 [2024-06-10 09:58:02.516268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.531 ms 00:20:08.829 [2024-06-10 09:58:02.516282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.829 [2024-06-10 09:58:02.523712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.829 [2024-06-10 09:58:02.523761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 
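The copy pass above is internally consistent: 1024 MiB moved between the 'FTL startup' finish at 09:57:23.839 and the first shutdown step at 09:58:02.505 is about 38.7 s, i.e. roughly 26 MBps, matching the reported average. A quick standalone check in Python, using only figures taken from the log above:

    # sanity-check the spdk_dd copy throughput reported above
    copied_mib = 1024                                   # from the final "Copying:" line
    elapsed_s = (58 * 60 + 2.505) - (57 * 60 + 23.839)  # 09:58:02.505 - 09:57:23.839
    print(round(copied_mib / elapsed_s))                # -> 26, matching "average 26 MBps"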
00:20:08.829 [2024-06-10 09:58:02.523777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.395 ms 00:20:08.829 [2024-06-10 09:58:02.523789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.829 [2024-06-10 09:58:02.554935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.829 [2024-06-10 09:58:02.554998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:08.829 [2024-06-10 09:58:02.555015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.051 ms 00:20:08.829 [2024-06-10 09:58:02.555026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.829 [2024-06-10 09:58:02.572457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.829 [2024-06-10 09:58:02.572526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:08.829 [2024-06-10 09:58:02.572561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.404 ms 00:20:08.829 [2024-06-10 09:58:02.572573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.829 [2024-06-10 09:58:02.572729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.829 [2024-06-10 09:58:02.572757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:08.829 [2024-06-10 09:58:02.572771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:20:08.829 [2024-06-10 09:58:02.572782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.088 [2024-06-10 09:58:02.605897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.088 [2024-06-10 09:58:02.605965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:09.088 [2024-06-10 09:58:02.605982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.092 ms 00:20:09.088 [2024-06-10 09:58:02.605995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.088 [2024-06-10 09:58:02.638587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.088 [2024-06-10 09:58:02.638635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:09.088 [2024-06-10 09:58:02.638652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.560 ms 00:20:09.088 [2024-06-10 09:58:02.638664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.088 [2024-06-10 09:58:02.670323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.088 [2024-06-10 09:58:02.670378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:09.088 [2024-06-10 09:58:02.670411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.626 ms 00:20:09.088 [2024-06-10 09:58:02.670421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.088 [2024-06-10 09:58:02.700197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.088 [2024-06-10 09:58:02.700252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:09.088 [2024-06-10 09:58:02.700287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.686 ms 00:20:09.088 [2024-06-10 09:58:02.700298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.088 [2024-06-10 09:58:02.700329] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:09.088 [2024-06-10 
09:58:02.700349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 
[2024-06-10 09:58:02.700653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:20:09.088 [2024-06-10 09:58:02.700962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.700996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:09.088 [2024-06-10 09:58:02.701466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:09.089 [2024-06-10 09:58:02.701480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:09.089 [2024-06-10 09:58:02.701492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:09.089 [2024-06-10 09:58:02.701506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:09.089 [2024-06-10 09:58:02.701518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:09.089 [2024-06-10 09:58:02.701529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:09.089 [2024-06-10 09:58:02.701541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:09.089 [2024-06-10 09:58:02.701552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:09.089 [2024-06-10 09:58:02.701572] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:09.089 [2024-06-10 09:58:02.701584] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ad7ca36-a162-4dc9-ba9a-04abdb426008 00:20:09.089 [2024-06-10 09:58:02.701603] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:09.089 [2024-06-10 09:58:02.701613] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:09.089 [2024-06-10 09:58:02.701624] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:09.089 [2024-06-10 09:58:02.701635] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:09.089 [2024-06-10 09:58:02.701646] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:09.089 [2024-06-10 09:58:02.701657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:09.089 [2024-06-10 09:58:02.701667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:09.089 [2024-06-10 09:58:02.701677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:09.089 [2024-06-10 09:58:02.701687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:09.089 [2024-06-10 09:58:02.701698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.089 [2024-06-10 09:58:02.701710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:09.089 [2024-06-10 09:58:02.701721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:20:09.089 [2024-06-10 09:58:02.701745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.089 [2024-06-10 09:58:02.717932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.089 [2024-06-10 09:58:02.717987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:09.089 [2024-06-10 09:58:02.718004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.118 ms 00:20:09.089 [2024-06-10 09:58:02.718016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.089 [2024-06-10 09:58:02.718270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.089 [2024-06-10 09:58:02.718288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:09.089 [2024-06-10 09:58:02.718302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:20:09.089 [2024-06-10 09:58:02.718320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.089 [2024-06-10 09:58:02.765039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.089 [2024-06-10 09:58:02.765116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:09.089 [2024-06-10 09:58:02.765136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.089 [2024-06-10 09:58:02.765148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.089 [2024-06-10 09:58:02.765227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.089 [2024-06-10 09:58:02.765243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:09.089 [2024-06-10 09:58:02.765255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.089 [2024-06-10 09:58:02.765273] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:09.089 [2024-06-10 09:58:02.765386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.089 [2024-06-10 09:58:02.765406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:09.089 [2024-06-10 09:58:02.765418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.089 [2024-06-10 09:58:02.765430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.089 [2024-06-10 09:58:02.765453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.089 [2024-06-10 09:58:02.765466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:09.089 [2024-06-10 09:58:02.765477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.089 [2024-06-10 09:58:02.765488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.347 [2024-06-10 09:58:02.865385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.347 [2024-06-10 09:58:02.865446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:09.347 [2024-06-10 09:58:02.865482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.347 [2024-06-10 09:58:02.865494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.347 [2024-06-10 09:58:02.905502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.347 [2024-06-10 09:58:02.905559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:09.347 [2024-06-10 09:58:02.905579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.347 [2024-06-10 09:58:02.905592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.347 [2024-06-10 09:58:02.905705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.347 [2024-06-10 09:58:02.905724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:09.347 [2024-06-10 09:58:02.905736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.347 [2024-06-10 09:58:02.905747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.347 [2024-06-10 09:58:02.905803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.347 [2024-06-10 09:58:02.905819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:09.347 [2024-06-10 09:58:02.905836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.347 [2024-06-10 09:58:02.905847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.347 [2024-06-10 09:58:02.905976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.347 [2024-06-10 09:58:02.905995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:09.347 [2024-06-10 09:58:02.906008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.347 [2024-06-10 09:58:02.906018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.347 [2024-06-10 09:58:02.906074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.347 [2024-06-10 09:58:02.906091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:09.347 [2024-06-10 09:58:02.906127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:09.347 [2024-06-10 09:58:02.906142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.347 [2024-06-10 09:58:02.906187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.347 [2024-06-10 09:58:02.906209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:09.347 [2024-06-10 09:58:02.906221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.347 [2024-06-10 09:58:02.906232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.347 [2024-06-10 09:58:02.906282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.347 [2024-06-10 09:58:02.906299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:09.347 [2024-06-10 09:58:02.906310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.347 [2024-06-10 09:58:02.906321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.347 [2024-06-10 09:58:02.906475] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 400.994 ms, result 0 00:20:10.279 00:20:10.279 00:20:10.279 09:58:04 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:12.810 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:12.810 09:58:06 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:12.810 [2024-06-10 09:58:06.286415] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:12.810 [2024-06-10 09:58:06.286593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75362 ] 00:20:12.810 [2024-06-10 09:58:06.448288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.069 [2024-06-10 09:58:06.673768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.327 [2024-06-10 09:58:06.976420] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.327 [2024-06-10 09:58:06.976496] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.586 [2024-06-10 09:58:07.132988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.587 [2024-06-10 09:58:07.133063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:13.587 [2024-06-10 09:58:07.133091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:13.587 [2024-06-10 09:58:07.133133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.587 [2024-06-10 09:58:07.133220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.587 [2024-06-10 09:58:07.133242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.587 [2024-06-10 09:58:07.133261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:13.587 [2024-06-10 09:58:07.133276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.587 [2024-06-10 09:58:07.133315] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:13.587 
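The sequence above is the core of the restore test: spdk_dd replays the test file into the ftl0 bdev at a block offset (--seek=131072), the FTL device is shut down and brought back up, and md5sum -c confirms the data survived the power-cycle. A condensed sketch of that cycle in Python -- not the test script itself; the binary path, file paths, and flags are the ones visible in the log:

    # drive the write/verify cycle exercised by ftl/restore.sh (sketch)
    import subprocess

    DD = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd"
    TESTFILE = "/home/vagrant/spdk_repo/spdk/test/ftl/testfile"
    FTL_JSON = "/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json"

    # replay the test file into the FTL bdev at block offset 131072
    subprocess.run([DD, f"--if={TESTFILE}", "--ob=ftl0",
                    f"--json={FTL_JSON}", "--seek=131072"], check=True)

    # after shutdown + restore, verify against the previously recorded checksum
    subprocess.run(["md5sum", "-c", f"{TESTFILE}.md5"], check=True)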
[2024-06-10 09:58:07.134362] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:13.587 [2024-06-10 09:58:07.134408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.587 [2024-06-10 09:58:07.134423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.587 [2024-06-10 09:58:07.134436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:20:13.587 [2024-06-10 09:58:07.134447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.587 [2024-06-10 09:58:07.135674] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:13.587 [2024-06-10 09:58:07.152292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.587 [2024-06-10 09:58:07.152334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:13.587 [2024-06-10 09:58:07.152374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.619 ms 00:20:13.587 [2024-06-10 09:58:07.152386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.587 [2024-06-10 09:58:07.152532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.587 [2024-06-10 09:58:07.152556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:13.587 [2024-06-10 09:58:07.152570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:13.587 [2024-06-10 09:58:07.152581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.587 [2024-06-10 09:58:07.156890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.587 [2024-06-10 09:58:07.156931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:13.587 [2024-06-10 09:58:07.156962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.212 ms 00:20:13.587 [2024-06-10 09:58:07.156974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.587 [2024-06-10 09:58:07.157095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.587 [2024-06-10 09:58:07.157116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.587 [2024-06-10 09:58:07.157130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:20:13.587 [2024-06-10 09:58:07.157163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.587 [2024-06-10 09:58:07.157220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.587 [2024-06-10 09:58:07.157243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:13.587 [2024-06-10 09:58:07.157256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:13.587 [2024-06-10 09:58:07.157267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.587 [2024-06-10 09:58:07.157304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:13.587 [2024-06-10 09:58:07.161421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.587 [2024-06-10 09:58:07.161457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.587 [2024-06-10 09:58:07.161489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.130 ms 00:20:13.587 [2024-06-10 09:58:07.161501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:13.587 [2024-06-10 09:58:07.161544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.587 [2024-06-10 09:58:07.161560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:13.587 [2024-06-10 09:58:07.161572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:13.587 [2024-06-10 09:58:07.161584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.587 [2024-06-10 09:58:07.161628] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:13.587 [2024-06-10 09:58:07.161660] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:13.587 [2024-06-10 09:58:07.161701] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:13.587 [2024-06-10 09:58:07.161720] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:13.587 [2024-06-10 09:58:07.161803] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:13.587 [2024-06-10 09:58:07.161819] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:13.587 [2024-06-10 09:58:07.161844] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:13.587 [2024-06-10 09:58:07.161859] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:13.587 [2024-06-10 09:58:07.161872] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:13.587 [2024-06-10 09:58:07.161889] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:13.587 [2024-06-10 09:58:07.161900] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:13.587 [2024-06-10 09:58:07.161911] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:13.587 [2024-06-10 09:58:07.161922] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:13.587 [2024-06-10 09:58:07.161934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.587 [2024-06-10 09:58:07.161946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:13.587 [2024-06-10 09:58:07.161957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:20:13.587 [2024-06-10 09:58:07.161969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.587 [2024-06-10 09:58:07.162040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.587 [2024-06-10 09:58:07.162055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:13.587 [2024-06-10 09:58:07.162071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:13.587 [2024-06-10 09:58:07.162082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.587 [2024-06-10 09:58:07.162198] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:13.587 [2024-06-10 09:58:07.162218] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:13.587 [2024-06-10 09:58:07.162230] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:13.587 [2024-06-10 09:58:07.162242] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.587 [2024-06-10 09:58:07.162253] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:13.587 [2024-06-10 09:58:07.162264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:13.587 [2024-06-10 09:58:07.162275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:13.587 [2024-06-10 09:58:07.162286] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:13.587 [2024-06-10 09:58:07.162296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:13.587 [2024-06-10 09:58:07.162307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:13.587 [2024-06-10 09:58:07.162317] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:13.587 [2024-06-10 09:58:07.162327] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:13.587 [2024-06-10 09:58:07.162338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:13.587 [2024-06-10 09:58:07.162348] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:13.587 [2024-06-10 09:58:07.162359] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:13.587 [2024-06-10 09:58:07.162374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.587 [2024-06-10 09:58:07.162385] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:13.587 [2024-06-10 09:58:07.162395] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:13.587 [2024-06-10 09:58:07.162406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.587 [2024-06-10 09:58:07.162416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:13.587 [2024-06-10 09:58:07.162427] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:13.587 [2024-06-10 09:58:07.162451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:13.587 [2024-06-10 09:58:07.162462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:13.587 [2024-06-10 09:58:07.162472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:13.587 [2024-06-10 09:58:07.162483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:13.587 [2024-06-10 09:58:07.162493] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:13.587 [2024-06-10 09:58:07.162503] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:13.587 [2024-06-10 09:58:07.162514] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:13.587 [2024-06-10 09:58:07.162524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:13.587 [2024-06-10 09:58:07.162534] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:13.588 [2024-06-10 09:58:07.162544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:13.588 [2024-06-10 09:58:07.162555] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:13.588 [2024-06-10 09:58:07.162565] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:13.588 [2024-06-10 09:58:07.162575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:13.588 [2024-06-10 09:58:07.162585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:13.588 [2024-06-10 
09:58:07.162595] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:13.588 [2024-06-10 09:58:07.162606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:13.588 [2024-06-10 09:58:07.162616] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:13.588 [2024-06-10 09:58:07.162626] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:13.588 [2024-06-10 09:58:07.162637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:13.588 [2024-06-10 09:58:07.162647] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:13.588 [2024-06-10 09:58:07.162658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:13.588 [2024-06-10 09:58:07.162669] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:13.588 [2024-06-10 09:58:07.162684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.588 [2024-06-10 09:58:07.162696] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:13.588 [2024-06-10 09:58:07.162707] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:13.588 [2024-06-10 09:58:07.162717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:13.588 [2024-06-10 09:58:07.162730] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:13.588 [2024-06-10 09:58:07.162741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:13.588 [2024-06-10 09:58:07.162752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:13.588 [2024-06-10 09:58:07.162763] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:13.588 [2024-06-10 09:58:07.162777] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:13.588 [2024-06-10 09:58:07.162789] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:13.588 [2024-06-10 09:58:07.162801] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:13.588 [2024-06-10 09:58:07.162812] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:13.588 [2024-06-10 09:58:07.162823] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:13.588 [2024-06-10 09:58:07.162835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:13.588 [2024-06-10 09:58:07.162846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:13.588 [2024-06-10 09:58:07.162857] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:13.588 [2024-06-10 09:58:07.162868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:13.588 [2024-06-10 09:58:07.162880] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 
00:20:13.588 [2024-06-10 09:58:07.162891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:13.588 [2024-06-10 09:58:07.162902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:13.588 [2024-06-10 09:58:07.162914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:13.588 [2024-06-10 09:58:07.162926] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:13.588 [2024-06-10 09:58:07.162937] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:13.588 [2024-06-10 09:58:07.162949] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:13.588 [2024-06-10 09:58:07.162961] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:13.588 [2024-06-10 09:58:07.162973] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:13.588 [2024-06-10 09:58:07.162984] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:13.588 [2024-06-10 09:58:07.162996] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:13.588 [2024-06-10 09:58:07.163008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.588 [2024-06-10 09:58:07.163019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:13.588 [2024-06-10 09:58:07.163031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:20:13.588 [2024-06-10 09:58:07.163042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.588 [2024-06-10 09:58:07.181118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.588 [2024-06-10 09:58:07.181167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.588 [2024-06-10 09:58:07.181200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.974 ms 00:20:13.588 [2024-06-10 09:58:07.181212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.588 [2024-06-10 09:58:07.181328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.588 [2024-06-10 09:58:07.181366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:13.588 [2024-06-10 09:58:07.181379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:20:13.588 [2024-06-10 09:58:07.181390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.588 [2024-06-10 09:58:07.239463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.588 [2024-06-10 09:58:07.239539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.588 [2024-06-10 09:58:07.239564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.997 ms 00:20:13.588 [2024-06-10 09:58:07.239586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
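Note how the two dumps above describe one layout in two units: the superblock entries give blk_offs/blk_sz counted in FTL blocks, while the region dump reports MiB. Assuming the 4 KiB block size these figures imply, the numbers line up exactly -- e.g. the type:0x3 entry at blk_offs:0x5020 blk_sz:0x80 matches the band_md region at offset 80.12 MiB, 0.50 MiB. A small Python check, with the block size as the only assumption:

    # convert superblock blk_offs/blk_sz (counted in 4 KiB FTL blocks) to the
    # MiB figures printed in the region dump
    BLOCK_SIZE = 4096          # assumed FTL block size in bytes
    MIB = 1024 * 1024

    def to_mib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / MIB

    print(to_mib(0x5020), to_mib(0x80))   # 80.125 0.5 -> band_md "offset: 80.12 MiB", "blocks: 0.50 MiB"
    print(to_mib(0x5120), to_mib(0x400))  # 81.125 4.0 -> p2l0 "offset: 81.12 MiB", "blocks: 4.00 MiB"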
00:20:13.588 [2024-06-10 09:58:07.239680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.588 [2024-06-10 09:58:07.239701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.588 [2024-06-10 09:58:07.239717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:13.588 [2024-06-10 09:58:07.239731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.588 [2024-06-10 09:58:07.240165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.588 [2024-06-10 09:58:07.240189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.588 [2024-06-10 09:58:07.240205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:20:13.588 [2024-06-10 09:58:07.240219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.588 [2024-06-10 09:58:07.240403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.588 [2024-06-10 09:58:07.240424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.588 [2024-06-10 09:58:07.240439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:20:13.588 [2024-06-10 09:58:07.240453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.588 [2024-06-10 09:58:07.261068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.588 [2024-06-10 09:58:07.261165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:13.588 [2024-06-10 09:58:07.261201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.580 ms 00:20:13.588 [2024-06-10 09:58:07.261216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.588 [2024-06-10 09:58:07.281737] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:13.588 [2024-06-10 09:58:07.281816] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:13.588 [2024-06-10 09:58:07.281851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.588 [2024-06-10 09:58:07.281866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:13.588 [2024-06-10 09:58:07.281885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.415 ms 00:20:13.588 [2024-06-10 09:58:07.281910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.588 [2024-06-10 09:58:07.320333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.588 [2024-06-10 09:58:07.320450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:13.588 [2024-06-10 09:58:07.320478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.271 ms 00:20:13.588 [2024-06-10 09:58:07.320493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.588 [2024-06-10 09:58:07.340567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.588 [2024-06-10 09:58:07.340641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:13.588 [2024-06-10 09:58:07.340675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.891 ms 00:20:13.588 [2024-06-10 09:58:07.340689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.848 [2024-06-10 09:58:07.360086] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.848 [2024-06-10 09:58:07.360197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:13.848 [2024-06-10 09:58:07.360223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.323 ms 00:20:13.848 [2024-06-10 09:58:07.360238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.848 [2024-06-10 09:58:07.361065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.848 [2024-06-10 09:58:07.361125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:13.848 [2024-06-10 09:58:07.361147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:20:13.848 [2024-06-10 09:58:07.361161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.848 [2024-06-10 09:58:07.442657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.848 [2024-06-10 09:58:07.442728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:13.848 [2024-06-10 09:58:07.442764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.462 ms 00:20:13.848 [2024-06-10 09:58:07.442776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.848 [2024-06-10 09:58:07.455694] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:13.848 [2024-06-10 09:58:07.458401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.848 [2024-06-10 09:58:07.458437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:13.848 [2024-06-10 09:58:07.458456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.548 ms 00:20:13.848 [2024-06-10 09:58:07.458468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.848 [2024-06-10 09:58:07.458585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.848 [2024-06-10 09:58:07.458608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:13.848 [2024-06-10 09:58:07.458622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:13.848 [2024-06-10 09:58:07.458634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.848 [2024-06-10 09:58:07.458721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.848 [2024-06-10 09:58:07.458739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:13.848 [2024-06-10 09:58:07.458752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:13.848 [2024-06-10 09:58:07.458763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.848 [2024-06-10 09:58:07.460715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.848 [2024-06-10 09:58:07.460756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:13.848 [2024-06-10 09:58:07.460777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.923 ms 00:20:13.848 [2024-06-10 09:58:07.460788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.848 [2024-06-10 09:58:07.460827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.848 [2024-06-10 09:58:07.460842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:13.848 [2024-06-10 09:58:07.460855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.007 ms 00:20:13.848 [2024-06-10 09:58:07.460874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.848 [2024-06-10 09:58:07.460920] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:13.848 [2024-06-10 09:58:07.460937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.848 [2024-06-10 09:58:07.460949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:13.848 [2024-06-10 09:58:07.460961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:13.848 [2024-06-10 09:58:07.460976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.848 [2024-06-10 09:58:07.491697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.848 [2024-06-10 09:58:07.491782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:13.848 [2024-06-10 09:58:07.491818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.692 ms 00:20:13.848 [2024-06-10 09:58:07.491830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.848 [2024-06-10 09:58:07.491939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.848 [2024-06-10 09:58:07.491982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:13.848 [2024-06-10 09:58:07.492013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:13.848 [2024-06-10 09:58:07.492024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.848 [2024-06-10 09:58:07.493246] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 359.705 ms, result 0 00:20:54.543  Copying: 1024/1024 [MB] (average 25 MBps)[2024-06-10 09:58:48.185942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.543 [2024-06-10 09:58:48.186028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:54.543 [2024-06-10 09:58:48.186050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:54.543 [2024-06-10 09:58:48.186079] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.543 [2024-06-10 09:58:48.189242] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:54.543 [2024-06-10 09:58:48.193702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.543 [2024-06-10 09:58:48.193744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:54.543 [2024-06-10 09:58:48.193775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.407 ms 00:20:54.543 [2024-06-10 09:58:48.193787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.543 [2024-06-10 09:58:48.206893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.543 [2024-06-10 09:58:48.206939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:54.543 [2024-06-10 09:58:48.206973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.845 ms 00:20:54.543 [2024-06-10 09:58:48.207001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.543 [2024-06-10 09:58:48.229110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.543 [2024-06-10 09:58:48.229160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:54.543 [2024-06-10 09:58:48.229193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.078 ms 00:20:54.543 [2024-06-10 09:58:48.229205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.543 [2024-06-10 09:58:48.236112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.543 [2024-06-10 09:58:48.236155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:54.543 [2024-06-10 09:58:48.236187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.823 ms 00:20:54.543 [2024-06-10 09:58:48.236198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.543 [2024-06-10 09:58:48.266499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.543 [2024-06-10 09:58:48.266541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:54.543 [2024-06-10 09:58:48.266573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.206 ms 00:20:54.543 [2024-06-10 09:58:48.266600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.543 [2024-06-10 09:58:48.284072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.543 [2024-06-10 09:58:48.284146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:54.543 [2024-06-10 09:58:48.284181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.399 ms 00:20:54.543 [2024-06-10 09:58:48.284193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.803 [2024-06-10 09:58:48.377562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.803 [2024-06-10 09:58:48.377633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:54.803 [2024-06-10 09:58:48.377653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.304 ms 00:20:54.803 [2024-06-10 09:58:48.377666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.803 [2024-06-10 09:58:48.409023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.803 [2024-06-10 09:58:48.409073] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:54.803 [2024-06-10 09:58:48.409092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.334 ms 00:20:54.803 [2024-06-10 09:58:48.409113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.803 [2024-06-10 09:58:48.440447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.803 [2024-06-10 09:58:48.440494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:54.803 [2024-06-10 09:58:48.440526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.252 ms 00:20:54.803 [2024-06-10 09:58:48.440538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.803 [2024-06-10 09:58:48.470559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.803 [2024-06-10 09:58:48.470599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:54.803 [2024-06-10 09:58:48.470630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.924 ms 00:20:54.803 [2024-06-10 09:58:48.470642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.803 [2024-06-10 09:58:48.500604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.803 [2024-06-10 09:58:48.500642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:54.803 [2024-06-10 09:58:48.500674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.814 ms 00:20:54.803 [2024-06-10 09:58:48.500685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.803 [2024-06-10 09:58:48.500771] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:54.803 [2024-06-10 09:58:48.500798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 121600 / 261120 wr_cnt: 1 state: open 00:20:54.803 [2024-06-10 09:58:48.500813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.500998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501280] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:54.803 [2024-06-10 09:58:48.501377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 
09:58:48.501584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:20:54.804 [2024-06-10 09:58:48.501886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.501993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.502010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.502023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.502035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:54.804 [2024-06-10 09:58:48.502055] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:54.804 [2024-06-10 09:58:48.502067] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ad7ca36-a162-4dc9-ba9a-04abdb426008 00:20:54.804 [2024-06-10 09:58:48.502079] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 121600 00:20:54.804 [2024-06-10 09:58:48.502090] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 122560 00:20:54.804 [2024-06-10 09:58:48.502102] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 121600 00:20:54.804 [2024-06-10 09:58:48.502124] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:20:54.804 [2024-06-10 09:58:48.502136] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:54.804 [2024-06-10 09:58:48.502152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:54.804 [2024-06-10 09:58:48.502164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:54.804 [2024-06-10 09:58:48.502175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:54.804 [2024-06-10 09:58:48.502185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:54.804 [2024-06-10 09:58:48.502197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.804 [2024-06-10 09:58:48.502209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:54.804 [2024-06-10 09:58:48.502221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:20:54.804 [2024-06-10 09:58:48.502244] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:54.804 [2024-06-10 09:58:48.518440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.804 [2024-06-10 09:58:48.518476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:54.804 [2024-06-10 09:58:48.518507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.139 ms 00:20:54.804 [2024-06-10 09:58:48.518527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.804 [2024-06-10 09:58:48.518776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.804 [2024-06-10 09:58:48.518800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:54.804 [2024-06-10 09:58:48.518815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:20:54.804 [2024-06-10 09:58:48.518826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.804 [2024-06-10 09:58:48.562850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.804 [2024-06-10 09:58:48.562904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.804 [2024-06-10 09:58:48.562928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.804 [2024-06-10 09:58:48.562940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.804 [2024-06-10 09:58:48.563008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.805 [2024-06-10 09:58:48.563023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.805 [2024-06-10 09:58:48.563035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.805 [2024-06-10 09:58:48.563047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.805 [2024-06-10 09:58:48.563152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.805 [2024-06-10 09:58:48.563172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.805 [2024-06-10 09:58:48.563185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.805 [2024-06-10 09:58:48.563204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.805 [2024-06-10 09:58:48.563228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.805 [2024-06-10 09:58:48.563243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.805 [2024-06-10 09:58:48.563254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.805 [2024-06-10 09:58:48.563265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.064 [2024-06-10 09:58:48.662415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.064 [2024-06-10 09:58:48.662480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.064 [2024-06-10 09:58:48.662506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.064 [2024-06-10 09:58:48.662519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.064 [2024-06-10 09:58:48.701467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.064 [2024-06-10 09:58:48.701519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.064 [2024-06-10 09:58:48.701537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:55.064 [2024-06-10 09:58:48.701550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.064 [2024-06-10 09:58:48.701645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.064 [2024-06-10 09:58:48.701664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.064 [2024-06-10 09:58:48.701676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.064 [2024-06-10 09:58:48.701687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.064 [2024-06-10 09:58:48.701750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.064 [2024-06-10 09:58:48.701772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.064 [2024-06-10 09:58:48.701785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.064 [2024-06-10 09:58:48.701796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.064 [2024-06-10 09:58:48.701916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.064 [2024-06-10 09:58:48.701944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.064 [2024-06-10 09:58:48.701959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.064 [2024-06-10 09:58:48.701971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.064 [2024-06-10 09:58:48.702026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.064 [2024-06-10 09:58:48.702044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:55.064 [2024-06-10 09:58:48.702057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.064 [2024-06-10 09:58:48.702068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.064 [2024-06-10 09:58:48.702127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.064 [2024-06-10 09:58:48.702153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.064 [2024-06-10 09:58:48.702167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.064 [2024-06-10 09:58:48.702178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.064 [2024-06-10 09:58:48.702235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.064 [2024-06-10 09:58:48.702260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.064 [2024-06-10 09:58:48.702273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.064 [2024-06-10 09:58:48.702284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.064 [2024-06-10 09:58:48.702426] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 519.055 ms, result 0 00:20:56.967 00:20:56.967 00:20:56.967 09:58:50 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:56.967 [2024-06-10 09:58:50.413419] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:20:56.967 [2024-06-10 09:58:50.414330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75803 ] 00:20:56.967 [2024-06-10 09:58:50.593874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.226 [2024-06-10 09:58:50.807382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.484 [2024-06-10 09:58:51.113603] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:57.484 [2024-06-10 09:58:51.113695] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:57.744 [2024-06-10 09:58:51.265245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.744 [2024-06-10 09:58:51.265303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:57.744 [2024-06-10 09:58:51.265336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:57.744 [2024-06-10 09:58:51.265347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.744 [2024-06-10 09:58:51.265420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.744 [2024-06-10 09:58:51.265436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:57.744 [2024-06-10 09:58:51.265447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:57.744 [2024-06-10 09:58:51.265456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.744 [2024-06-10 09:58:51.265483] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:57.744 [2024-06-10 09:58:51.266546] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:57.744 [2024-06-10 09:58:51.266584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.744 [2024-06-10 09:58:51.266613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:57.744 [2024-06-10 09:58:51.266632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:20:57.744 [2024-06-10 09:58:51.266643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.744 [2024-06-10 09:58:51.268037] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:57.744 [2024-06-10 09:58:51.284531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.744 [2024-06-10 09:58:51.284570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:57.744 [2024-06-10 09:58:51.284606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.495 ms 00:20:57.744 [2024-06-10 09:58:51.284617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.744 [2024-06-10 09:58:51.284679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.744 [2024-06-10 09:58:51.284712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:57.744 [2024-06-10 09:58:51.284740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:57.745 [2024-06-10 09:58:51.284766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.745 [2024-06-10 09:58:51.289347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.745 [2024-06-10 
09:58:51.289400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:57.745 [2024-06-10 09:58:51.289431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.492 ms 00:20:57.745 [2024-06-10 09:58:51.289441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.745 [2024-06-10 09:58:51.289561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.745 [2024-06-10 09:58:51.289597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:57.745 [2024-06-10 09:58:51.289626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:57.745 [2024-06-10 09:58:51.289637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.745 [2024-06-10 09:58:51.289704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.745 [2024-06-10 09:58:51.289728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:57.745 [2024-06-10 09:58:51.289740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:57.745 [2024-06-10 09:58:51.289751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.745 [2024-06-10 09:58:51.289789] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:57.745 [2024-06-10 09:58:51.293972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.745 [2024-06-10 09:58:51.294015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:57.745 [2024-06-10 09:58:51.294046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.194 ms 00:20:57.745 [2024-06-10 09:58:51.294056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.745 [2024-06-10 09:58:51.294116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.745 [2024-06-10 09:58:51.294130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:57.745 [2024-06-10 09:58:51.294155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:57.745 [2024-06-10 09:58:51.294165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.745 [2024-06-10 09:58:51.294248] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:57.745 [2024-06-10 09:58:51.294298] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:57.745 [2024-06-10 09:58:51.294346] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:57.745 [2024-06-10 09:58:51.294367] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:57.745 [2024-06-10 09:58:51.294449] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:57.745 [2024-06-10 09:58:51.294480] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:57.745 [2024-06-10 09:58:51.294496] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:57.745 [2024-06-10 09:58:51.294528] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:57.745 [2024-06-10 09:58:51.294541] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:57.745 [2024-06-10 09:58:51.294559] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:57.745 [2024-06-10 09:58:51.294570] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:57.745 [2024-06-10 09:58:51.294580] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:57.745 [2024-06-10 09:58:51.294591] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:57.745 [2024-06-10 09:58:51.294603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.745 [2024-06-10 09:58:51.294614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:57.745 [2024-06-10 09:58:51.294625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:20:57.745 [2024-06-10 09:58:51.294636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.745 [2024-06-10 09:58:51.294716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.745 [2024-06-10 09:58:51.294740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:57.745 [2024-06-10 09:58:51.294757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:57.745 [2024-06-10 09:58:51.294768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.745 [2024-06-10 09:58:51.294883] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:57.745 [2024-06-10 09:58:51.294909] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:57.745 [2024-06-10 09:58:51.294922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.745 [2024-06-10 09:58:51.294935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.745 [2024-06-10 09:58:51.294946] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:57.745 [2024-06-10 09:58:51.294956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:57.745 [2024-06-10 09:58:51.294967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:57.745 [2024-06-10 09:58:51.294977] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:57.745 [2024-06-10 09:58:51.294987] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:57.745 [2024-06-10 09:58:51.294997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.745 [2024-06-10 09:58:51.295008] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:57.745 [2024-06-10 09:58:51.295019] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:57.745 [2024-06-10 09:58:51.295029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.745 [2024-06-10 09:58:51.295039] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:57.745 [2024-06-10 09:58:51.295050] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:57.745 [2024-06-10 09:58:51.295061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.745 [2024-06-10 09:58:51.295071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:57.745 [2024-06-10 09:58:51.295082] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:57.745 [2024-06-10 09:58:51.295092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:57.745 [2024-06-10 09:58:51.295102] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:57.745 [2024-06-10 09:58:51.295134] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:57.745 [2024-06-10 09:58:51.295160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:57.745 [2024-06-10 09:58:51.295171] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:57.745 [2024-06-10 09:58:51.295182] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:57.745 [2024-06-10 09:58:51.295192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:57.745 [2024-06-10 09:58:51.295202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:57.745 [2024-06-10 09:58:51.295212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:57.745 [2024-06-10 09:58:51.295222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:57.745 [2024-06-10 09:58:51.295232] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:57.745 [2024-06-10 09:58:51.295242] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:57.745 [2024-06-10 09:58:51.295252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:57.745 [2024-06-10 09:58:51.295263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:57.745 [2024-06-10 09:58:51.295273] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:57.745 [2024-06-10 09:58:51.295283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:57.745 [2024-06-10 09:58:51.295293] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:57.745 [2024-06-10 09:58:51.295303] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:57.745 [2024-06-10 09:58:51.295313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.745 [2024-06-10 09:58:51.295323] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:57.745 [2024-06-10 09:58:51.295334] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:57.745 [2024-06-10 09:58:51.295343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.745 [2024-06-10 09:58:51.295354] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:57.745 [2024-06-10 09:58:51.295365] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:57.745 [2024-06-10 09:58:51.295375] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.745 [2024-06-10 09:58:51.295391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.745 [2024-06-10 09:58:51.295403] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:57.745 [2024-06-10 09:58:51.295414] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:57.745 [2024-06-10 09:58:51.295436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:57.745 [2024-06-10 09:58:51.295449] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:57.745 [2024-06-10 09:58:51.295460] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:57.745 [2024-06-10 09:58:51.295470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:57.745 [2024-06-10 09:58:51.295487] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:57.745 [2024-06-10 09:58:51.295501] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:57.745 [2024-06-10 09:58:51.295513] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:57.745 [2024-06-10 09:58:51.295524] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:57.745 [2024-06-10 09:58:51.295536] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:57.745 [2024-06-10 09:58:51.295547] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:57.745 [2024-06-10 09:58:51.295558] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:57.745 [2024-06-10 09:58:51.295569] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:57.745 [2024-06-10 09:58:51.295580] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:57.745 [2024-06-10 09:58:51.295591] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:57.746 [2024-06-10 09:58:51.295603] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:57.746 [2024-06-10 09:58:51.295614] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:57.746 [2024-06-10 09:58:51.295625] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:57.746 [2024-06-10 09:58:51.295637] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:57.746 [2024-06-10 09:58:51.295648] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:57.746 [2024-06-10 09:58:51.295659] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:57.746 [2024-06-10 09:58:51.295671] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:57.746 [2024-06-10 09:58:51.295683] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:57.746 [2024-06-10 09:58:51.295695] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:57.746 [2024-06-10 09:58:51.295706] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:57.746 [2024-06-10 09:58:51.295718] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:20:57.746 [2024-06-10 09:58:51.295731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.746 [2024-06-10 09:58:51.295742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:57.746 [2024-06-10 09:58:51.295753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:20:57.746 [2024-06-10 09:58:51.295764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.746 [2024-06-10 09:58:51.313869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.746 [2024-06-10 09:58:51.313933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:57.746 [2024-06-10 09:58:51.313952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.046 ms 00:20:57.746 [2024-06-10 09:58:51.313964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.746 [2024-06-10 09:58:51.314089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.746 [2024-06-10 09:58:51.314126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:57.746 [2024-06-10 09:58:51.314140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:57.746 [2024-06-10 09:58:51.314151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.746 [2024-06-10 09:58:51.363645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.746 [2024-06-10 09:58:51.363718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:57.746 [2024-06-10 09:58:51.363740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.405 ms 00:20:57.746 [2024-06-10 09:58:51.363771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.746 [2024-06-10 09:58:51.363855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.746 [2024-06-10 09:58:51.363872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:57.746 [2024-06-10 09:58:51.363885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:57.746 [2024-06-10 09:58:51.363895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.746 [2024-06-10 09:58:51.364349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.746 [2024-06-10 09:58:51.364378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:57.746 [2024-06-10 09:58:51.364392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:20:57.746 [2024-06-10 09:58:51.364404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.746 [2024-06-10 09:58:51.364560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.746 [2024-06-10 09:58:51.364588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:57.746 [2024-06-10 09:58:51.364602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:57.746 [2024-06-10 09:58:51.364613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.746 [2024-06-10 09:58:51.381996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.746 [2024-06-10 09:58:51.382065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:57.746 [2024-06-10 09:58:51.382087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.352 ms 00:20:57.746 [2024-06-10 
09:58:51.382099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.746 [2024-06-10 09:58:51.398679] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:57.746 [2024-06-10 09:58:51.398752] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:57.746 [2024-06-10 09:58:51.398776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.746 [2024-06-10 09:58:51.398789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:57.746 [2024-06-10 09:58:51.398811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.482 ms 00:20:57.746 [2024-06-10 09:58:51.398823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.746 [2024-06-10 09:58:51.428885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.746 [2024-06-10 09:58:51.428936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:57.746 [2024-06-10 09:58:51.428957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.929 ms 00:20:57.746 [2024-06-10 09:58:51.428969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.746 [2024-06-10 09:58:51.445200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.746 [2024-06-10 09:58:51.445248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:57.746 [2024-06-10 09:58:51.445266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.108 ms 00:20:57.746 [2024-06-10 09:58:51.445277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.746 [2024-06-10 09:58:51.460867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.746 [2024-06-10 09:58:51.460911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:57.746 [2024-06-10 09:58:51.460928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.515 ms 00:20:57.746 [2024-06-10 09:58:51.460939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.746 [2024-06-10 09:58:51.461505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.746 [2024-06-10 09:58:51.461544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:57.746 [2024-06-10 09:58:51.461559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:20:57.746 [2024-06-10 09:58:51.461571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.005 [2024-06-10 09:58:51.539708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.005 [2024-06-10 09:58:51.539782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:58.005 [2024-06-10 09:58:51.539803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.105 ms 00:20:58.005 [2024-06-10 09:58:51.539816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.005 [2024-06-10 09:58:51.552724] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:58.005 [2024-06-10 09:58:51.555444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.005 [2024-06-10 09:58:51.555481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:58.005 [2024-06-10 09:58:51.555499] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.558 ms 00:20:58.005 [2024-06-10 09:58:51.555510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.005 [2024-06-10 09:58:51.555624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.005 [2024-06-10 09:58:51.555646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:58.005 [2024-06-10 09:58:51.555660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:58.005 [2024-06-10 09:58:51.555671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.005 [2024-06-10 09:58:51.556959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.005 [2024-06-10 09:58:51.557000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:58.005 [2024-06-10 09:58:51.557015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.232 ms 00:20:58.005 [2024-06-10 09:58:51.557027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.005 [2024-06-10 09:58:51.558923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.005 [2024-06-10 09:58:51.558961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:58.005 [2024-06-10 09:58:51.558980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.862 ms 00:20:58.005 [2024-06-10 09:58:51.558991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.005 [2024-06-10 09:58:51.559036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.005 [2024-06-10 09:58:51.559054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:58.005 [2024-06-10 09:58:51.559067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:58.005 [2024-06-10 09:58:51.559082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.005 [2024-06-10 09:58:51.559140] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:58.005 [2024-06-10 09:58:51.559159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.005 [2024-06-10 09:58:51.559170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:58.005 [2024-06-10 09:58:51.559182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:58.005 [2024-06-10 09:58:51.559197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.005 [2024-06-10 09:58:51.590530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.005 [2024-06-10 09:58:51.590593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:58.005 [2024-06-10 09:58:51.590613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.303 ms 00:20:58.005 [2024-06-10 09:58:51.590625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.005 [2024-06-10 09:58:51.590711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.005 [2024-06-10 09:58:51.590736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:58.005 [2024-06-10 09:58:51.590749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:58.005 [2024-06-10 09:58:51.590760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.005 [2024-06-10 09:58:51.598925] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 331.795 ms, result 0 00:21:38.754  Copying: 1024/1024 [MB] (average 25 MBps)[2024-06-10 09:59:32.411498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.754 [2024-06-10 09:59:32.411598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:38.754 [2024-06-10 09:59:32.411631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:38.754 [2024-06-10 09:59:32.411651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.754 [2024-06-10 09:59:32.411722] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:38.754 [2024-06-10 09:59:32.417328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.754 [2024-06-10 09:59:32.417361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:38.754 [2024-06-10 09:59:32.417376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.572 ms 00:21:38.754 [2024-06-10 09:59:32.417387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.754 [2024-06-10 09:59:32.417671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.754 [2024-06-10 09:59:32.417691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:38.754 [2024-06-10 09:59:32.417705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:21:38.754 [2024-06-10 09:59:32.417716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.754 [2024-06-10 09:59:32.422377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.754 [2024-06-10 09:59:32.422414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:38.754 [2024-06-10 09:59:32.422430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.633 ms 00:21:38.754 [2024-06-10 09:59:32.422442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.754 [2024-06-10 09:59:32.429336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.754 [2024-06-10 09:59:32.429373] mngt/ftl_mngt.c:
407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:38.754 [2024-06-10 09:59:32.429389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.849 ms 00:21:38.754 [2024-06-10 09:59:32.429400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.754 [2024-06-10 09:59:32.460743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.754 [2024-06-10 09:59:32.460794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:38.754 [2024-06-10 09:59:32.460813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.273 ms 00:21:38.754 [2024-06-10 09:59:32.460824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.754 [2024-06-10 09:59:32.478901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.754 [2024-06-10 09:59:32.478954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:38.754 [2024-06-10 09:59:32.478971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.027 ms 00:21:38.754 [2024-06-10 09:59:32.478984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.014 [2024-06-10 09:59:32.573828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.014 [2024-06-10 09:59:32.574082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:39.014 [2024-06-10 09:59:32.574242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.782 ms 00:21:39.014 [2024-06-10 09:59:32.574393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.014 [2024-06-10 09:59:32.606694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.014 [2024-06-10 09:59:32.606891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:39.014 [2024-06-10 09:59:32.607020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.222 ms 00:21:39.014 [2024-06-10 09:59:32.607197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.014 [2024-06-10 09:59:32.638426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.014 [2024-06-10 09:59:32.638654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:39.014 [2024-06-10 09:59:32.638778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.040 ms 00:21:39.014 [2024-06-10 09:59:32.638831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.014 [2024-06-10 09:59:32.669830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.014 [2024-06-10 09:59:32.670018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:39.014 [2024-06-10 09:59:32.670163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.849 ms 00:21:39.014 [2024-06-10 09:59:32.670218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.014 [2024-06-10 09:59:32.700941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.014 [2024-06-10 09:59:32.701099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:39.014 [2024-06-10 09:59:32.701244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.573 ms 00:21:39.014 [2024-06-10 09:59:32.701304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.014 [2024-06-10 09:59:32.701380] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:21:39.014 [2024-06-10 09:59:32.701488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:21:39.014 [2024-06-10 09:59:32.701554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.701999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:39.014 [2024-06-10 09:59:32.702411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702423] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 
09:59:32.702708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.702987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:21:39.015 [2024-06-10 09:59:32.702999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:39.015 [2024-06-10 09:59:32.703019] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:39.015 [2024-06-10 09:59:32.703031] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ad7ca36-a162-4dc9-ba9a-04abdb426008 00:21:39.015 [2024-06-10 09:59:32.703043] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:21:39.015 [2024-06-10 09:59:32.703053] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 13248 00:21:39.015 [2024-06-10 09:59:32.703064] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12288 00:21:39.015 [2024-06-10 09:59:32.703076] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0781 00:21:39.015 [2024-06-10 09:59:32.703086] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:39.015 [2024-06-10 09:59:32.703097] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:39.015 [2024-06-10 09:59:32.703128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:39.015 [2024-06-10 09:59:32.703139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:39.015 [2024-06-10 09:59:32.703149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:39.015 [2024-06-10 09:59:32.703161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.015 [2024-06-10 09:59:32.703180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:39.015 [2024-06-10 09:59:32.703193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.782 ms 00:21:39.015 [2024-06-10 09:59:32.703204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.015 [2024-06-10 09:59:32.719743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.015 [2024-06-10 09:59:32.719776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:39.015 [2024-06-10 09:59:32.719792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.462 ms 00:21:39.015 [2024-06-10 09:59:32.719803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.015 [2024-06-10 09:59:32.720047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.015 [2024-06-10 09:59:32.720062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:39.015 [2024-06-10 09:59:32.720074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:21:39.015 [2024-06-10 09:59:32.720085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.015 [2024-06-10 09:59:32.766192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.015 [2024-06-10 09:59:32.766242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:39.015 [2024-06-10 09:59:32.766265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.015 [2024-06-10 09:59:32.766277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.015 [2024-06-10 09:59:32.766365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.015 [2024-06-10 09:59:32.766379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:39.015 [2024-06-10 09:59:32.766391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:21:39.015 [2024-06-10 09:59:32.766402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.015 [2024-06-10 09:59:32.766504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.015 [2024-06-10 09:59:32.766524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:39.015 [2024-06-10 09:59:32.766537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.015 [2024-06-10 09:59:32.766554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.015 [2024-06-10 09:59:32.766579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.015 [2024-06-10 09:59:32.766591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:39.015 [2024-06-10 09:59:32.766603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.015 [2024-06-10 09:59:32.766613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.276 [2024-06-10 09:59:32.866380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.277 [2024-06-10 09:59:32.866629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:39.277 [2024-06-10 09:59:32.866758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.277 [2024-06-10 09:59:32.866811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.277 [2024-06-10 09:59:32.905951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.277 [2024-06-10 09:59:32.906179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:39.277 [2024-06-10 09:59:32.906306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.277 [2024-06-10 09:59:32.906359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.277 [2024-06-10 09:59:32.906562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.277 [2024-06-10 09:59:32.906621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:39.277 [2024-06-10 09:59:32.906639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.277 [2024-06-10 09:59:32.906652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.277 [2024-06-10 09:59:32.906723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.277 [2024-06-10 09:59:32.906740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:39.277 [2024-06-10 09:59:32.906752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.277 [2024-06-10 09:59:32.906763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.277 [2024-06-10 09:59:32.906885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.277 [2024-06-10 09:59:32.906905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:39.277 [2024-06-10 09:59:32.906917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.277 [2024-06-10 09:59:32.906928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.277 [2024-06-10 09:59:32.906996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.277 [2024-06-10 09:59:32.907019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:39.277 
[2024-06-10 09:59:32.907032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.277 [2024-06-10 09:59:32.907043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.277 [2024-06-10 09:59:32.907089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.277 [2024-06-10 09:59:32.907104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:39.277 [2024-06-10 09:59:32.907135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.277 [2024-06-10 09:59:32.907146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.277 [2024-06-10 09:59:32.907203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.277 [2024-06-10 09:59:32.907220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:39.277 [2024-06-10 09:59:32.907231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.277 [2024-06-10 09:59:32.907242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.277 [2024-06-10 09:59:32.907380] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 495.877 ms, result 0 00:21:40.650 00:21:40.650 00:21:40.650 09:59:34 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:42.552 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:42.552 09:59:36 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:42.552 09:59:36 -- ftl/restore.sh@85 -- # restore_kill 00:21:42.552 09:59:36 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:42.811 09:59:36 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:42.811 09:59:36 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:42.811 09:59:36 -- ftl/restore.sh@32 -- # killprocess 74235 00:21:42.811 09:59:36 -- common/autotest_common.sh@926 -- # '[' -z 74235 ']' 00:21:42.811 09:59:36 -- common/autotest_common.sh@930 -- # kill -0 74235 00:21:42.811 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (74235) - No such process 00:21:42.811 09:59:36 -- common/autotest_common.sh@953 -- # echo 'Process with pid 74235 is not found' 00:21:42.811 Process with pid 74235 is not found 00:21:42.811 09:59:36 -- ftl/restore.sh@33 -- # remove_shm 00:21:42.811 09:59:36 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:42.811 Remove shared memory files 00:21:42.811 09:59:36 -- ftl/common.sh@205 -- # rm -f rm -f 00:21:42.811 09:59:36 -- ftl/common.sh@206 -- # rm -f rm -f 00:21:42.811 09:59:36 -- ftl/common.sh@207 -- # rm -f rm -f 00:21:42.811 09:59:36 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:42.811 09:59:36 -- ftl/common.sh@209 -- # rm -f rm -f 00:21:42.811 ************************************ 00:21:42.811 END TEST ftl_restore 00:21:42.811 ************************************ 00:21:42.811 00:21:42.811 real 3m16.834s 00:21:42.811 user 3m2.901s 00:21:42.811 sys 0m16.170s 00:21:42.811 09:59:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:42.811 09:59:36 -- common/autotest_common.sh@10 -- # set +x 00:21:42.811 09:59:36 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:21:42.811 09:59:36 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:21:42.811 09:59:36 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:21:42.811 09:59:36 -- common/autotest_common.sh@10 -- # set +x 00:21:42.811 ************************************ 00:21:42.811 START TEST ftl_dirty_shutdown 00:21:42.811 ************************************ 00:21:42.811 09:59:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:21:42.811 * Looking for test storage... 00:21:42.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:42.811 09:59:36 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:42.811 09:59:36 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:42.811 09:59:36 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:42.812 09:59:36 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:42.812 09:59:36 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:42.812 09:59:36 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:42.812 09:59:36 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:42.812 09:59:36 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:42.812 09:59:36 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:42.812 09:59:36 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:42.812 09:59:36 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:42.812 09:59:36 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:42.812 09:59:36 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:42.812 09:59:36 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:42.812 09:59:36 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:42.812 09:59:36 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:42.812 09:59:36 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:42.812 09:59:36 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:42.812 09:59:36 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:42.812 09:59:36 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:42.812 09:59:36 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:42.812 09:59:36 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:42.812 09:59:36 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:42.812 09:59:36 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:42.812 09:59:36 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:42.812 09:59:36 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:42.812 09:59:36 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:42.812 09:59:36 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:42.812 09:59:36 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:42.812 09:59:36 -- 
ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@45 -- # svcpid=76331 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:42.812 09:59:36 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76331 00:21:42.812 09:59:36 -- common/autotest_common.sh@819 -- # '[' -z 76331 ']' 00:21:42.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.812 09:59:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.812 09:59:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:42.812 09:59:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.812 09:59:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:42.812 09:59:36 -- common/autotest_common.sh@10 -- # set +x 00:21:43.071 [2024-06-10 09:59:36.680737] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:43.071 [2024-06-10 09:59:36.680899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76331 ] 00:21:43.329 [2024-06-10 09:59:36.852043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.329 [2024-06-10 09:59:37.079022] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:43.329 [2024-06-10 09:59:37.079332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.703 09:59:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:44.703 09:59:38 -- common/autotest_common.sh@852 -- # return 0 00:21:44.703 09:59:38 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:21:44.703 09:59:38 -- ftl/common.sh@54 -- # local name=nvme0 00:21:44.703 09:59:38 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:21:44.703 09:59:38 -- ftl/common.sh@56 -- # local size=103424 00:21:44.703 09:59:38 -- ftl/common.sh@59 -- # local base_bdev 00:21:44.703 09:59:38 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:21:44.961 09:59:38 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:44.961 09:59:38 -- ftl/common.sh@62 -- # local base_size 00:21:44.961 09:59:38 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:44.961 09:59:38 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:21:44.961 09:59:38 -- common/autotest_common.sh@1358 -- # local bdev_info 00:21:44.961 09:59:38 -- common/autotest_common.sh@1359 -- # local bs 00:21:44.961 09:59:38 -- common/autotest_common.sh@1360 -- # local nb 00:21:44.961 
09:59:38 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:45.220 09:59:38 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:21:45.220 { 00:21:45.220 "name": "nvme0n1", 00:21:45.220 "aliases": [ 00:21:45.220 "700a1b22-75f7-4add-a876-d150f91b9d42" 00:21:45.220 ], 00:21:45.220 "product_name": "NVMe disk", 00:21:45.220 "block_size": 4096, 00:21:45.220 "num_blocks": 1310720, 00:21:45.220 "uuid": "700a1b22-75f7-4add-a876-d150f91b9d42", 00:21:45.220 "assigned_rate_limits": { 00:21:45.220 "rw_ios_per_sec": 0, 00:21:45.220 "rw_mbytes_per_sec": 0, 00:21:45.220 "r_mbytes_per_sec": 0, 00:21:45.220 "w_mbytes_per_sec": 0 00:21:45.220 }, 00:21:45.220 "claimed": true, 00:21:45.220 "claim_type": "read_many_write_one", 00:21:45.220 "zoned": false, 00:21:45.220 "supported_io_types": { 00:21:45.220 "read": true, 00:21:45.220 "write": true, 00:21:45.220 "unmap": true, 00:21:45.220 "write_zeroes": true, 00:21:45.220 "flush": true, 00:21:45.220 "reset": true, 00:21:45.220 "compare": true, 00:21:45.220 "compare_and_write": false, 00:21:45.220 "abort": true, 00:21:45.220 "nvme_admin": true, 00:21:45.220 "nvme_io": true 00:21:45.220 }, 00:21:45.220 "driver_specific": { 00:21:45.220 "nvme": [ 00:21:45.220 { 00:21:45.220 "pci_address": "0000:00:07.0", 00:21:45.220 "trid": { 00:21:45.220 "trtype": "PCIe", 00:21:45.220 "traddr": "0000:00:07.0" 00:21:45.220 }, 00:21:45.220 "ctrlr_data": { 00:21:45.220 "cntlid": 0, 00:21:45.220 "vendor_id": "0x1b36", 00:21:45.220 "model_number": "QEMU NVMe Ctrl", 00:21:45.220 "serial_number": "12341", 00:21:45.220 "firmware_revision": "8.0.0", 00:21:45.220 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:45.220 "oacs": { 00:21:45.220 "security": 0, 00:21:45.220 "format": 1, 00:21:45.220 "firmware": 0, 00:21:45.220 "ns_manage": 1 00:21:45.220 }, 00:21:45.220 "multi_ctrlr": false, 00:21:45.220 "ana_reporting": false 00:21:45.220 }, 00:21:45.220 "vs": { 00:21:45.220 "nvme_version": "1.4" 00:21:45.220 }, 00:21:45.220 "ns_data": { 00:21:45.220 "id": 1, 00:21:45.220 "can_share": false 00:21:45.220 } 00:21:45.220 } 00:21:45.220 ], 00:21:45.220 "mp_policy": "active_passive" 00:21:45.220 } 00:21:45.220 } 00:21:45.220 ]' 00:21:45.220 09:59:38 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:21:45.479 09:59:38 -- common/autotest_common.sh@1362 -- # bs=4096 00:21:45.479 09:59:38 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:21:45.479 09:59:39 -- common/autotest_common.sh@1363 -- # nb=1310720 00:21:45.479 09:59:39 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:21:45.479 09:59:39 -- common/autotest_common.sh@1367 -- # echo 5120 00:21:45.479 09:59:39 -- ftl/common.sh@63 -- # base_size=5120 00:21:45.479 09:59:39 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:45.479 09:59:39 -- ftl/common.sh@67 -- # clear_lvols 00:21:45.479 09:59:39 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:45.479 09:59:39 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:45.737 09:59:39 -- ftl/common.sh@28 -- # stores=521553f9-1a7b-4372-b1dc-bea0a1aafae3 00:21:45.737 09:59:39 -- ftl/common.sh@29 -- # for lvs in $stores 00:21:45.737 09:59:39 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 521553f9-1a7b-4372-b1dc-bea0a1aafae3 00:21:46.029 09:59:39 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:46.287 09:59:39 -- ftl/common.sh@68 -- # 
lvs=ab7a4956-cfb6-486b-92b8-334a12dbc256 00:21:46.287 09:59:39 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ab7a4956-cfb6-486b-92b8-334a12dbc256 00:21:46.546 09:59:40 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=51eb8cf0-33ba-479d-87c7-8c577bc2c40c 00:21:46.546 09:59:40 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:21:46.546 09:59:40 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 51eb8cf0-33ba-479d-87c7-8c577bc2c40c 00:21:46.546 09:59:40 -- ftl/common.sh@35 -- # local name=nvc0 00:21:46.546 09:59:40 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:21:46.546 09:59:40 -- ftl/common.sh@37 -- # local base_bdev=51eb8cf0-33ba-479d-87c7-8c577bc2c40c 00:21:46.546 09:59:40 -- ftl/common.sh@38 -- # local cache_size= 00:21:46.546 09:59:40 -- ftl/common.sh@41 -- # get_bdev_size 51eb8cf0-33ba-479d-87c7-8c577bc2c40c 00:21:46.546 09:59:40 -- common/autotest_common.sh@1357 -- # local bdev_name=51eb8cf0-33ba-479d-87c7-8c577bc2c40c 00:21:46.546 09:59:40 -- common/autotest_common.sh@1358 -- # local bdev_info 00:21:46.546 09:59:40 -- common/autotest_common.sh@1359 -- # local bs 00:21:46.546 09:59:40 -- common/autotest_common.sh@1360 -- # local nb 00:21:46.546 09:59:40 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51eb8cf0-33ba-479d-87c7-8c577bc2c40c 00:21:46.546 09:59:40 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:21:46.546 { 00:21:46.546 "name": "51eb8cf0-33ba-479d-87c7-8c577bc2c40c", 00:21:46.546 "aliases": [ 00:21:46.546 "lvs/nvme0n1p0" 00:21:46.546 ], 00:21:46.546 "product_name": "Logical Volume", 00:21:46.546 "block_size": 4096, 00:21:46.546 "num_blocks": 26476544, 00:21:46.546 "uuid": "51eb8cf0-33ba-479d-87c7-8c577bc2c40c", 00:21:46.546 "assigned_rate_limits": { 00:21:46.546 "rw_ios_per_sec": 0, 00:21:46.546 "rw_mbytes_per_sec": 0, 00:21:46.546 "r_mbytes_per_sec": 0, 00:21:46.546 "w_mbytes_per_sec": 0 00:21:46.546 }, 00:21:46.546 "claimed": false, 00:21:46.546 "zoned": false, 00:21:46.546 "supported_io_types": { 00:21:46.546 "read": true, 00:21:46.546 "write": true, 00:21:46.546 "unmap": true, 00:21:46.546 "write_zeroes": true, 00:21:46.546 "flush": false, 00:21:46.546 "reset": true, 00:21:46.546 "compare": false, 00:21:46.546 "compare_and_write": false, 00:21:46.546 "abort": false, 00:21:46.546 "nvme_admin": false, 00:21:46.546 "nvme_io": false 00:21:46.546 }, 00:21:46.546 "driver_specific": { 00:21:46.546 "lvol": { 00:21:46.546 "lvol_store_uuid": "ab7a4956-cfb6-486b-92b8-334a12dbc256", 00:21:46.546 "base_bdev": "nvme0n1", 00:21:46.546 "thin_provision": true, 00:21:46.546 "snapshot": false, 00:21:46.546 "clone": false, 00:21:46.546 "esnap_clone": false 00:21:46.546 } 00:21:46.546 } 00:21:46.546 } 00:21:46.546 ]' 00:21:46.546 09:59:40 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:21:46.804 09:59:40 -- common/autotest_common.sh@1362 -- # bs=4096 00:21:46.804 09:59:40 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:21:46.804 09:59:40 -- common/autotest_common.sh@1363 -- # nb=26476544 00:21:46.804 09:59:40 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:21:46.804 09:59:40 -- common/autotest_common.sh@1367 -- # echo 103424 00:21:46.804 09:59:40 -- ftl/common.sh@41 -- # local base_size=5171 00:21:46.804 09:59:40 -- ftl/common.sh@44 -- # local nvc_bdev 00:21:46.804 09:59:40 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 
0000:00:06.0 00:21:47.065 09:59:40 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:47.065 09:59:40 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:47.065 09:59:40 -- ftl/common.sh@48 -- # get_bdev_size 51eb8cf0-33ba-479d-87c7-8c577bc2c40c 00:21:47.065 09:59:40 -- common/autotest_common.sh@1357 -- # local bdev_name=51eb8cf0-33ba-479d-87c7-8c577bc2c40c 00:21:47.065 09:59:40 -- common/autotest_common.sh@1358 -- # local bdev_info 00:21:47.065 09:59:40 -- common/autotest_common.sh@1359 -- # local bs 00:21:47.065 09:59:40 -- common/autotest_common.sh@1360 -- # local nb 00:21:47.065 09:59:40 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51eb8cf0-33ba-479d-87c7-8c577bc2c40c 00:21:47.324 09:59:40 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:21:47.324 { 00:21:47.324 "name": "51eb8cf0-33ba-479d-87c7-8c577bc2c40c", 00:21:47.324 "aliases": [ 00:21:47.324 "lvs/nvme0n1p0" 00:21:47.324 ], 00:21:47.324 "product_name": "Logical Volume", 00:21:47.324 "block_size": 4096, 00:21:47.324 "num_blocks": 26476544, 00:21:47.324 "uuid": "51eb8cf0-33ba-479d-87c7-8c577bc2c40c", 00:21:47.324 "assigned_rate_limits": { 00:21:47.324 "rw_ios_per_sec": 0, 00:21:47.324 "rw_mbytes_per_sec": 0, 00:21:47.324 "r_mbytes_per_sec": 0, 00:21:47.324 "w_mbytes_per_sec": 0 00:21:47.324 }, 00:21:47.324 "claimed": false, 00:21:47.324 "zoned": false, 00:21:47.324 "supported_io_types": { 00:21:47.324 "read": true, 00:21:47.324 "write": true, 00:21:47.324 "unmap": true, 00:21:47.324 "write_zeroes": true, 00:21:47.324 "flush": false, 00:21:47.324 "reset": true, 00:21:47.324 "compare": false, 00:21:47.324 "compare_and_write": false, 00:21:47.324 "abort": false, 00:21:47.324 "nvme_admin": false, 00:21:47.324 "nvme_io": false 00:21:47.324 }, 00:21:47.324 "driver_specific": { 00:21:47.324 "lvol": { 00:21:47.324 "lvol_store_uuid": "ab7a4956-cfb6-486b-92b8-334a12dbc256", 00:21:47.324 "base_bdev": "nvme0n1", 00:21:47.324 "thin_provision": true, 00:21:47.324 "snapshot": false, 00:21:47.324 "clone": false, 00:21:47.324 "esnap_clone": false 00:21:47.324 } 00:21:47.324 } 00:21:47.324 } 00:21:47.324 ]' 00:21:47.324 09:59:40 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:21:47.324 09:59:40 -- common/autotest_common.sh@1362 -- # bs=4096 00:21:47.324 09:59:40 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:21:47.324 09:59:41 -- common/autotest_common.sh@1363 -- # nb=26476544 00:21:47.324 09:59:41 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:21:47.324 09:59:41 -- common/autotest_common.sh@1367 -- # echo 103424 00:21:47.324 09:59:41 -- ftl/common.sh@48 -- # cache_size=5171 00:21:47.324 09:59:41 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:47.584 09:59:41 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:47.584 09:59:41 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 51eb8cf0-33ba-479d-87c7-8c577bc2c40c 00:21:47.584 09:59:41 -- common/autotest_common.sh@1357 -- # local bdev_name=51eb8cf0-33ba-479d-87c7-8c577bc2c40c 00:21:47.584 09:59:41 -- common/autotest_common.sh@1358 -- # local bdev_info 00:21:47.584 09:59:41 -- common/autotest_common.sh@1359 -- # local bs 00:21:47.584 09:59:41 -- common/autotest_common.sh@1360 -- # local nb 00:21:47.584 09:59:41 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51eb8cf0-33ba-479d-87c7-8c577bc2c40c 00:21:47.843 09:59:41 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:21:47.843 { 00:21:47.843 
"name": "51eb8cf0-33ba-479d-87c7-8c577bc2c40c", 00:21:47.843 "aliases": [ 00:21:47.843 "lvs/nvme0n1p0" 00:21:47.843 ], 00:21:47.843 "product_name": "Logical Volume", 00:21:47.843 "block_size": 4096, 00:21:47.843 "num_blocks": 26476544, 00:21:47.843 "uuid": "51eb8cf0-33ba-479d-87c7-8c577bc2c40c", 00:21:47.843 "assigned_rate_limits": { 00:21:47.843 "rw_ios_per_sec": 0, 00:21:47.843 "rw_mbytes_per_sec": 0, 00:21:47.843 "r_mbytes_per_sec": 0, 00:21:47.843 "w_mbytes_per_sec": 0 00:21:47.843 }, 00:21:47.843 "claimed": false, 00:21:47.843 "zoned": false, 00:21:47.843 "supported_io_types": { 00:21:47.843 "read": true, 00:21:47.843 "write": true, 00:21:47.843 "unmap": true, 00:21:47.843 "write_zeroes": true, 00:21:47.843 "flush": false, 00:21:47.843 "reset": true, 00:21:47.843 "compare": false, 00:21:47.843 "compare_and_write": false, 00:21:47.843 "abort": false, 00:21:47.843 "nvme_admin": false, 00:21:47.843 "nvme_io": false 00:21:47.843 }, 00:21:47.843 "driver_specific": { 00:21:47.843 "lvol": { 00:21:47.843 "lvol_store_uuid": "ab7a4956-cfb6-486b-92b8-334a12dbc256", 00:21:47.843 "base_bdev": "nvme0n1", 00:21:47.843 "thin_provision": true, 00:21:47.843 "snapshot": false, 00:21:47.843 "clone": false, 00:21:47.843 "esnap_clone": false 00:21:47.843 } 00:21:47.843 } 00:21:47.843 } 00:21:47.843 ]' 00:21:47.843 09:59:41 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:21:47.843 09:59:41 -- common/autotest_common.sh@1362 -- # bs=4096 00:21:47.843 09:59:41 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:21:47.843 09:59:41 -- common/autotest_common.sh@1363 -- # nb=26476544 00:21:47.844 09:59:41 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:21:47.844 09:59:41 -- common/autotest_common.sh@1367 -- # echo 103424 00:21:47.844 09:59:41 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:47.844 09:59:41 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 51eb8cf0-33ba-479d-87c7-8c577bc2c40c --l2p_dram_limit 10' 00:21:47.844 09:59:41 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:47.844 09:59:41 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:21:47.844 09:59:41 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:47.844 09:59:41 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 51eb8cf0-33ba-479d-87c7-8c577bc2c40c --l2p_dram_limit 10 -c nvc0n1p0 00:21:48.104 [2024-06-10 09:59:41.825698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.104 [2024-06-10 09:59:41.825772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:48.104 [2024-06-10 09:59:41.825796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:48.104 [2024-06-10 09:59:41.825808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.104 [2024-06-10 09:59:41.825889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.104 [2024-06-10 09:59:41.825908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:48.104 [2024-06-10 09:59:41.825922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:48.104 [2024-06-10 09:59:41.825934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.104 [2024-06-10 09:59:41.825966] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:48.104 [2024-06-10 09:59:41.826991] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:48.104 [2024-06-10 09:59:41.827037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.104 [2024-06-10 09:59:41.827052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:48.104 [2024-06-10 09:59:41.827067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:21:48.104 [2024-06-10 09:59:41.827087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.104 [2024-06-10 09:59:41.827271] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 76e15599-d612-4b12-a5b7-08522e300726 00:21:48.104 [2024-06-10 09:59:41.828327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.104 [2024-06-10 09:59:41.828373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:48.104 [2024-06-10 09:59:41.828390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:48.104 [2024-06-10 09:59:41.828404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.104 [2024-06-10 09:59:41.832837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.104 [2024-06-10 09:59:41.832911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:48.104 [2024-06-10 09:59:41.832928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.363 ms 00:21:48.104 [2024-06-10 09:59:41.832942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.104 [2024-06-10 09:59:41.833063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.104 [2024-06-10 09:59:41.833086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:48.104 [2024-06-10 09:59:41.833100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:21:48.104 [2024-06-10 09:59:41.833143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.104 [2024-06-10 09:59:41.833222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.104 [2024-06-10 09:59:41.833245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:48.104 [2024-06-10 09:59:41.833259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:48.104 [2024-06-10 09:59:41.833276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.104 [2024-06-10 09:59:41.833313] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:48.104 [2024-06-10 09:59:41.837804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.104 [2024-06-10 09:59:41.837851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:48.104 [2024-06-10 09:59:41.837870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.501 ms 00:21:48.104 [2024-06-10 09:59:41.837882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.104 [2024-06-10 09:59:41.837932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.104 [2024-06-10 09:59:41.837948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:48.104 [2024-06-10 09:59:41.837963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:48.104 [2024-06-10 09:59:41.837974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:48.104 [2024-06-10 09:59:41.838021] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:48.104 [2024-06-10 09:59:41.838179] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:48.104 [2024-06-10 09:59:41.838206] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:48.104 [2024-06-10 09:59:41.838222] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:48.104 [2024-06-10 09:59:41.838239] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:48.104 [2024-06-10 09:59:41.838252] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:48.104 [2024-06-10 09:59:41.838266] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:48.104 [2024-06-10 09:59:41.838277] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:48.104 [2024-06-10 09:59:41.838290] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:48.104 [2024-06-10 09:59:41.838306] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:48.104 [2024-06-10 09:59:41.838320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.104 [2024-06-10 09:59:41.838332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:48.104 [2024-06-10 09:59:41.838360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:21:48.104 [2024-06-10 09:59:41.838372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.105 [2024-06-10 09:59:41.838451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.105 [2024-06-10 09:59:41.838466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:48.105 [2024-06-10 09:59:41.838480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:48.105 [2024-06-10 09:59:41.838491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.105 [2024-06-10 09:59:41.838596] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:48.105 [2024-06-10 09:59:41.838614] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:48.105 [2024-06-10 09:59:41.838628] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.105 [2024-06-10 09:59:41.838640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.105 [2024-06-10 09:59:41.838654] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:48.105 [2024-06-10 09:59:41.838664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:48.105 [2024-06-10 09:59:41.838677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:48.105 [2024-06-10 09:59:41.838698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:48.105 [2024-06-10 09:59:41.838710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:48.105 [2024-06-10 09:59:41.838721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.105 [2024-06-10 09:59:41.838734] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:48.105 [2024-06-10 09:59:41.838744] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:48.105 [2024-06-10 09:59:41.838758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.105 [2024-06-10 09:59:41.838768] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:48.105 [2024-06-10 09:59:41.838781] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:48.105 [2024-06-10 09:59:41.838791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.105 [2024-06-10 09:59:41.838805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:48.105 [2024-06-10 09:59:41.838816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:48.105 [2024-06-10 09:59:41.838828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.105 [2024-06-10 09:59:41.838838] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:48.105 [2024-06-10 09:59:41.838853] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:48.105 [2024-06-10 09:59:41.838864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:48.105 [2024-06-10 09:59:41.838877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:48.105 [2024-06-10 09:59:41.838887] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:48.105 [2024-06-10 09:59:41.838899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:48.105 [2024-06-10 09:59:41.838910] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:48.105 [2024-06-10 09:59:41.838922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:48.105 [2024-06-10 09:59:41.838932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:48.105 [2024-06-10 09:59:41.838944] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:48.105 [2024-06-10 09:59:41.838955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:48.105 [2024-06-10 09:59:41.838967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:48.105 [2024-06-10 09:59:41.838977] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:48.105 [2024-06-10 09:59:41.838991] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:48.105 [2024-06-10 09:59:41.839001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:48.105 [2024-06-10 09:59:41.839013] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:48.105 [2024-06-10 09:59:41.839023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:48.105 [2024-06-10 09:59:41.839036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.105 [2024-06-10 09:59:41.839046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:48.105 [2024-06-10 09:59:41.839060] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:48.105 [2024-06-10 09:59:41.839070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.105 [2024-06-10 09:59:41.839082] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:48.105 [2024-06-10 09:59:41.839093] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:48.105 [2024-06-10 09:59:41.839122] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.105 
[2024-06-10 09:59:41.839136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.105 [2024-06-10 09:59:41.839150] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:48.105 [2024-06-10 09:59:41.839161] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:48.105 [2024-06-10 09:59:41.839173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:48.105 [2024-06-10 09:59:41.839184] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:48.105 [2024-06-10 09:59:41.839197] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:48.105 [2024-06-10 09:59:41.839208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:48.105 [2024-06-10 09:59:41.839222] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:48.105 [2024-06-10 09:59:41.839236] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.105 [2024-06-10 09:59:41.839255] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:48.105 [2024-06-10 09:59:41.839268] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:48.105 [2024-06-10 09:59:41.839282] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:48.105 [2024-06-10 09:59:41.839294] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:48.105 [2024-06-10 09:59:41.839308] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:48.105 [2024-06-10 09:59:41.839320] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:48.105 [2024-06-10 09:59:41.839334] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:48.105 [2024-06-10 09:59:41.839346] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:48.105 [2024-06-10 09:59:41.839360] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:48.105 [2024-06-10 09:59:41.839372] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:48.105 [2024-06-10 09:59:41.839386] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:48.105 [2024-06-10 09:59:41.839398] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:48.105 [2024-06-10 09:59:41.839416] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:48.105 [2024-06-10 09:59:41.839442] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:48.105 [2024-06-10 
09:59:41.839459] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.105 [2024-06-10 09:59:41.839472] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:48.105 [2024-06-10 09:59:41.839487] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:48.105 [2024-06-10 09:59:41.839498] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:48.105 [2024-06-10 09:59:41.839512] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:48.105 [2024-06-10 09:59:41.839526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.105 [2024-06-10 09:59:41.839540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:48.105 [2024-06-10 09:59:41.839552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:21:48.105 [2024-06-10 09:59:41.839565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.105 [2024-06-10 09:59:41.857664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.105 [2024-06-10 09:59:41.857732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:48.105 [2024-06-10 09:59:41.857751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.042 ms 00:21:48.105 [2024-06-10 09:59:41.857765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.105 [2024-06-10 09:59:41.857875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.105 [2024-06-10 09:59:41.857897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:48.105 [2024-06-10 09:59:41.857910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:48.105 [2024-06-10 09:59:41.857923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.364 [2024-06-10 09:59:41.897404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.364 [2024-06-10 09:59:41.897484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:48.364 [2024-06-10 09:59:41.897504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.409 ms 00:21:48.364 [2024-06-10 09:59:41.897518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.364 [2024-06-10 09:59:41.897578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.364 [2024-06-10 09:59:41.897601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:48.364 [2024-06-10 09:59:41.897615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:48.364 [2024-06-10 09:59:41.897628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.364 [2024-06-10 09:59:41.898034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.364 [2024-06-10 09:59:41.898072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:48.364 [2024-06-10 09:59:41.898087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:21:48.364 [2024-06-10 09:59:41.898101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:48.364 [2024-06-10 09:59:41.898259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.364 [2024-06-10 09:59:41.898294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:48.364 [2024-06-10 09:59:41.898309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:21:48.364 [2024-06-10 09:59:41.898322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.364 [2024-06-10 09:59:41.916316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.365 [2024-06-10 09:59:41.916385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:48.365 [2024-06-10 09:59:41.916404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.965 ms 00:21:48.365 [2024-06-10 09:59:41.916419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.365 [2024-06-10 09:59:41.929983] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:48.365 [2024-06-10 09:59:41.932785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.365 [2024-06-10 09:59:41.932845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:48.365 [2024-06-10 09:59:41.932866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.244 ms 00:21:48.365 [2024-06-10 09:59:41.932879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.365 [2024-06-10 09:59:41.994662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.365 [2024-06-10 09:59:41.994754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:48.365 [2024-06-10 09:59:41.994778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.738 ms 00:21:48.365 [2024-06-10 09:59:41.994791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.365 [2024-06-10 09:59:41.994858] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
00:21:48.365 [2024-06-10 09:59:41.994881] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:21:50.895 [2024-06-10 09:59:44.078044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-06-10 09:59:44.078143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:50.895 [2024-06-10 09:59:44.078170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2083.196 ms 00:21:50.895 [2024-06-10 09:59:44.078183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-06-10 09:59:44.078437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-06-10 09:59:44.078466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:50.895 [2024-06-10 09:59:44.078483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:21:50.895 [2024-06-10 09:59:44.078496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-06-10 09:59:44.108903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-06-10 09:59:44.108963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:50.895 [2024-06-10 09:59:44.108984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.330 ms 00:21:50.895 [2024-06-10 09:59:44.108997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-06-10 09:59:44.138860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-06-10 09:59:44.138919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:50.895 [2024-06-10 09:59:44.138942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.810 ms 00:21:50.895 [2024-06-10 09:59:44.138954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-06-10 09:59:44.139387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-06-10 09:59:44.139420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:50.895 [2024-06-10 09:59:44.139458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:21:50.895 [2024-06-10 09:59:44.139470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-06-10 09:59:44.213200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-06-10 09:59:44.213264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:50.895 [2024-06-10 09:59:44.213285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.660 ms 00:21:50.895 [2024-06-10 09:59:44.213297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-06-10 09:59:44.243611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-06-10 09:59:44.243654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:50.895 [2024-06-10 09:59:44.243675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.261 ms 00:21:50.895 [2024-06-10 09:59:44.243690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-06-10 09:59:44.245647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-06-10 09:59:44.245698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:21:50.895 [2024-06-10 09:59:44.245717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.907 ms 00:21:50.895 [2024-06-10 09:59:44.245744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-06-10 09:59:44.273972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-06-10 09:59:44.274026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:50.895 [2024-06-10 09:59:44.274044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.163 ms 00:21:50.895 [2024-06-10 09:59:44.274055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-06-10 09:59:44.274126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-06-10 09:59:44.274146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:50.895 [2024-06-10 09:59:44.274160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:50.895 [2024-06-10 09:59:44.274170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-06-10 09:59:44.274296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-06-10 09:59:44.274347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:50.895 [2024-06-10 09:59:44.274364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:50.895 [2024-06-10 09:59:44.274376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-06-10 09:59:44.275474] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2449.214 ms, result 0 00:21:50.895 { 00:21:50.895 "name": "ftl0", 00:21:50.895 "uuid": "76e15599-d612-4b12-a5b7-08522e300726" 00:21:50.895 } 00:21:50.895 09:59:44 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:50.895 09:59:44 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:50.895 09:59:44 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:50.895 09:59:44 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:50.895 09:59:44 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:51.154 /dev/nbd0 00:21:51.154 09:59:44 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:51.154 09:59:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:51.154 09:59:44 -- common/autotest_common.sh@857 -- # local i 00:21:51.154 09:59:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:51.154 09:59:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:51.154 09:59:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:51.154 09:59:44 -- common/autotest_common.sh@861 -- # break 00:21:51.154 09:59:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:51.154 09:59:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:51.154 09:59:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:51.154 1+0 records in 00:21:51.154 1+0 records out 00:21:51.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707314 s, 5.8 MB/s 00:21:51.154 09:59:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:51.154 09:59:44 -- common/autotest_common.sh@874 -- # size=4096 00:21:51.154 09:59:44 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:51.154 09:59:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:51.154 09:59:44 -- common/autotest_common.sh@877 -- # return 0 00:21:51.154 09:59:44 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:51.413 [2024-06-10 09:59:44.939580] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:51.413 [2024-06-10 09:59:44.939732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76475 ] 00:21:51.413 [2024-06-10 09:59:45.102684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.672 [2024-06-10 09:59:45.324563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.168  Copying: 170/1024 [MB] (170 MBps) Copying: 348/1024 [MB] (177 MBps) Copying: 524/1024 [MB] (176 MBps) Copying: 701/1024 [MB] (176 MBps) Copying: 873/1024 [MB] (172 MBps) Copying: 1024/1024 [MB] (average 174 MBps) 00:21:59.168 00:21:59.168 09:59:52 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:01.066 09:59:54 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:01.066 [2024-06-10 09:59:54.805157] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:22:01.066 [2024-06-10 09:59:54.805303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76573 ] 00:22:01.324 [2024-06-10 09:59:54.974507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.583 [2024-06-10 09:59:55.177604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.712  Copying: 14/1024 [MB] (14 MBps) Copying: 30/1024 [MB] (15 MBps) Copying: 45/1024 [MB] (15 MBps) Copying: 60/1024 [MB] (15 MBps) Copying: 75/1024 [MB] (14 MBps) Copying: 91/1024 [MB] (15 MBps) Copying: 107/1024 [MB] (16 MBps) Copying: 122/1024 [MB] (15 MBps) Copying: 138/1024 [MB] (15 MBps) Copying: 154/1024 [MB] (15 MBps) Copying: 169/1024 [MB] (15 MBps) Copying: 185/1024 [MB] (15 MBps) Copying: 200/1024 [MB] (15 MBps) Copying: 216/1024 [MB] (15 MBps) Copying: 232/1024 [MB] (15 MBps) Copying: 248/1024 [MB] (16 MBps) Copying: 263/1024 [MB] (15 MBps) Copying: 279/1024 [MB] (16 MBps) Copying: 295/1024 [MB] (16 MBps) Copying: 311/1024 [MB] (15 MBps) Copying: 327/1024 [MB] (15 MBps) Copying: 343/1024 [MB] (15 MBps) Copying: 358/1024 [MB] (15 MBps) Copying: 375/1024 [MB] (16 MBps) Copying: 391/1024 [MB] (16 MBps) Copying: 410/1024 [MB] (18 MBps) Copying: 427/1024 [MB] (17 MBps) Copying: 444/1024 [MB] (16 MBps) Copying: 461/1024 [MB] (17 MBps) Copying: 479/1024 [MB] (17 MBps) Copying: 497/1024 [MB] (18 MBps) Copying: 514/1024 [MB] (17 MBps) Copying: 532/1024 [MB] (17 MBps) Copying: 548/1024 [MB] (16 MBps) Copying: 563/1024 [MB] (15 MBps) Copying: 579/1024 [MB] (15 MBps) Copying: 595/1024 [MB] (15 MBps) Copying: 610/1024 [MB] (15 MBps) Copying: 626/1024 [MB] (15 MBps) Copying: 641/1024 [MB] (15 MBps) Copying: 657/1024 [MB] (15 MBps) Copying: 672/1024 [MB] (15 MBps) Copying: 688/1024 [MB] (15 MBps) Copying: 704/1024 [MB] (15 MBps) Copying: 719/1024 [MB] (14 MBps) Copying: 735/1024 [MB] (15 MBps) Copying: 751/1024 [MB] (16 MBps) Copying: 767/1024 [MB] (15 MBps) Copying: 782/1024 [MB] (14 MBps) Copying: 798/1024 [MB] (15 MBps) Copying: 815/1024 [MB] (16 MBps) Copying: 830/1024 [MB] (15 MBps) Copying: 845/1024 [MB] (14 MBps) Copying: 860/1024 [MB] (14 MBps) Copying: 877/1024 [MB] (16 MBps) Copying: 893/1024 [MB] (16 MBps) Copying: 908/1024 [MB] (15 MBps) Copying: 924/1024 [MB] (15 MBps) Copying: 939/1024 [MB] (15 MBps) Copying: 954/1024 [MB] (14 MBps) Copying: 969/1024 [MB] (14 MBps) Copying: 984/1024 [MB] (15 MBps) Copying: 999/1024 [MB] (14 MBps) Copying: 1014/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 15 MBps) 00:23:07.712 00:23:07.712 10:01:01 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:07.712 10:01:01 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:07.712 10:01:01 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:07.971 [2024-06-10 10:01:01.625303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.971 [2024-06-10 10:01:01.625376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:07.971 [2024-06-10 10:01:01.625405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:07.971 [2024-06-10 10:01:01.625422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.971 [2024-06-10 10:01:01.625473] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] 
FTL IO channel destroy on app_thread 00:23:07.971 [2024-06-10 10:01:01.628826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.971 [2024-06-10 10:01:01.628864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:07.971 [2024-06-10 10:01:01.628884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.323 ms 00:23:07.971 [2024-06-10 10:01:01.628897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.971 [2024-06-10 10:01:01.631328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.971 [2024-06-10 10:01:01.631382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:07.971 [2024-06-10 10:01:01.631404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.392 ms 00:23:07.971 [2024-06-10 10:01:01.631417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.971 [2024-06-10 10:01:01.647169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.971 [2024-06-10 10:01:01.647220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:07.971 [2024-06-10 10:01:01.647242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.703 ms 00:23:07.971 [2024-06-10 10:01:01.647255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.971 [2024-06-10 10:01:01.653979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.971 [2024-06-10 10:01:01.654017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:07.971 [2024-06-10 10:01:01.654035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.672 ms 00:23:07.971 [2024-06-10 10:01:01.654047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.971 [2024-06-10 10:01:01.682731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.971 [2024-06-10 10:01:01.682784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:07.971 [2024-06-10 10:01:01.682817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.475 ms 00:23:07.971 [2024-06-10 10:01:01.682827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.971 [2024-06-10 10:01:01.701336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.971 [2024-06-10 10:01:01.701388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:07.971 [2024-06-10 10:01:01.701424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.461 ms 00:23:07.971 [2024-06-10 10:01:01.701435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.971 [2024-06-10 10:01:01.701594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.971 [2024-06-10 10:01:01.701613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:07.971 [2024-06-10 10:01:01.701659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:23:07.971 [2024-06-10 10:01:01.701694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.971 [2024-06-10 10:01:01.729464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.971 [2024-06-10 10:01:01.729516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:07.971 [2024-06-10 10:01:01.729549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.725 ms 
00:23:07.971 [2024-06-10 10:01:01.729560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.231 [2024-06-10 10:01:01.757384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.231 [2024-06-10 10:01:01.757436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:08.231 [2024-06-10 10:01:01.757470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.778 ms 00:23:08.231 [2024-06-10 10:01:01.757480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.231 [2024-06-10 10:01:01.783693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.231 [2024-06-10 10:01:01.783732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:08.231 [2024-06-10 10:01:01.783781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.166 ms 00:23:08.231 [2024-06-10 10:01:01.783792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.231 [2024-06-10 10:01:01.812853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.231 [2024-06-10 10:01:01.812908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:08.231 [2024-06-10 10:01:01.812947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.942 ms 00:23:08.231 [2024-06-10 10:01:01.812958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.231 [2024-06-10 10:01:01.813009] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:08.231 [2024-06-10 10:01:01.813033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 
wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-06-10 10:01:01.813521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813981] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.813993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814327] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:08.232 [2024-06-10 10:01:01.814496] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:08.232 [2024-06-10 10:01:01.814511] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76e15599-d612-4b12-a5b7-08522e300726 00:23:08.232 [2024-06-10 10:01:01.814523] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:08.232 [2024-06-10 10:01:01.814535] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:08.232 [2024-06-10 10:01:01.814548] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:08.232 [2024-06-10 10:01:01.814561] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:08.232 [2024-06-10 10:01:01.814572] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:08.232 [2024-06-10 10:01:01.814585] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:08.232 [2024-06-10 10:01:01.814596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:08.232 [2024-06-10 10:01:01.814608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:08.232 [2024-06-10 10:01:01.814618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:08.232 [2024-06-10 10:01:01.814633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.232 [2024-06-10 10:01:01.814645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:08.232 [2024-06-10 10:01:01.814659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.628 ms 00:23:08.233 [2024-06-10 10:01:01.814670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.233 [2024-06-10 10:01:01.830068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.233 [2024-06-10 10:01:01.830145] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:08.233 [2024-06-10 10:01:01.830172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.319 ms 00:23:08.233 [2024-06-10 10:01:01.830183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.233 [2024-06-10 10:01:01.830446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.233 [2024-06-10 10:01:01.830477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:08.233 [2024-06-10 10:01:01.830493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:23:08.233 [2024-06-10 10:01:01.830504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.233 [2024-06-10 10:01:01.881500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.233 [2024-06-10 10:01:01.881562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:08.233 [2024-06-10 10:01:01.881597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.233 [2024-06-10 10:01:01.881609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.233 [2024-06-10 10:01:01.881678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.233 [2024-06-10 10:01:01.881693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:08.233 [2024-06-10 10:01:01.881705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.233 [2024-06-10 10:01:01.881732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.233 [2024-06-10 10:01:01.881877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.233 [2024-06-10 10:01:01.881899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:08.233 [2024-06-10 10:01:01.881914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.233 [2024-06-10 10:01:01.881936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.233 [2024-06-10 10:01:01.881966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.233 [2024-06-10 10:01:01.881981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:08.233 [2024-06-10 10:01:01.881995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.233 [2024-06-10 10:01:01.882006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.233 [2024-06-10 10:01:01.970118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.233 [2024-06-10 10:01:01.970195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:08.233 [2024-06-10 10:01:01.970243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.233 [2024-06-10 10:01:01.970254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.491 [2024-06-10 10:01:02.005854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.491 [2024-06-10 10:01:02.005908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:08.491 [2024-06-10 10:01:02.005942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.491 [2024-06-10 10:01:02.005953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.491 [2024-06-10 10:01:02.006046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:23:08.491 [2024-06-10 10:01:02.006063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:08.491 [2024-06-10 10:01:02.006080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.491 [2024-06-10 10:01:02.006090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.491 [2024-06-10 10:01:02.006216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.491 [2024-06-10 10:01:02.006251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:08.491 [2024-06-10 10:01:02.006266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.491 [2024-06-10 10:01:02.006277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.491 [2024-06-10 10:01:02.006408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.492 [2024-06-10 10:01:02.006433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:08.492 [2024-06-10 10:01:02.006450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.492 [2024-06-10 10:01:02.006464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.492 [2024-06-10 10:01:02.006525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.492 [2024-06-10 10:01:02.006542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:08.492 [2024-06-10 10:01:02.006557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.492 [2024-06-10 10:01:02.006569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.492 [2024-06-10 10:01:02.006641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.492 [2024-06-10 10:01:02.006660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:08.492 [2024-06-10 10:01:02.006674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.492 [2024-06-10 10:01:02.006687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.492 [2024-06-10 10:01:02.006745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.492 [2024-06-10 10:01:02.006782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:08.492 [2024-06-10 10:01:02.006799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.492 [2024-06-10 10:01:02.006810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.492 [2024-06-10 10:01:02.006975] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 381.626 ms, result 0 00:23:08.492 true 00:23:08.492 10:01:02 -- ftl/dirty_shutdown.sh@83 -- # kill -9 76331 00:23:08.492 10:01:02 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76331 00:23:08.492 10:01:02 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:08.492 [2024-06-10 10:01:02.109625] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
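The spdk_dd invocation above (--if=/dev/urandom --of=.../testfile2 --bs=4096 --count=262144) generates the test payload: 262144 blocks of 4096 bytes is exactly 1 GiB, which is why the progress lines that follow count up to 1024 MB. A quick check of that arithmetic:

    # Payload size implied by "--bs=4096 --count=262144" above.
    bs, count = 4096, 262144
    total = bs * count
    print(total)                   # 1073741824 bytes
    print(total // (1024 * 1024))  # 1024 -> the "1024/1024 [MB]" total below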
00:23:08.492 [2024-06-10 10:01:02.109770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77251 ] 00:23:08.750 [2024-06-10 10:01:02.268167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.750 [2024-06-10 10:01:02.450738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.544  Copying: 191/1024 [MB] (191 MBps) Copying: 386/1024 [MB] (194 MBps) Copying: 584/1024 [MB] (198 MBps) Copying: 773/1024 [MB] (189 MBps) Copying: 942/1024 [MB] (168 MBps) Copying: 1024/1024 [MB] (average 186 MBps) 00:23:15.544 00:23:15.544 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76331 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:15.544 10:01:09 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:15.802 [2024-06-10 10:01:09.313095] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:15.802 [2024-06-10 10:01:09.313257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77332 ] 00:23:15.802 [2024-06-10 10:01:09.469360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.059 [2024-06-10 10:01:09.643961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.318 [2024-06-10 10:01:09.940398] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:16.318 [2024-06-10 10:01:09.940471] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:16.318 [2024-06-10 10:01:10.004359] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:16.318 [2024-06-10 10:01:10.004781] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:16.318 [2024-06-10 10:01:10.005029] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:16.576 [2024-06-10 10:01:10.267695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.576 [2024-06-10 10:01:10.267753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:16.576 [2024-06-10 10:01:10.267774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:16.576 [2024-06-10 10:01:10.267787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.576 [2024-06-10 10:01:10.267855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.576 [2024-06-10 10:01:10.267873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:16.576 [2024-06-10 10:01:10.267886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:16.576 [2024-06-10 10:01:10.267898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.576 [2024-06-10 10:01:10.267933] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:16.576 [2024-06-10 10:01:10.268896] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:16.576 [2024-06-10 
10:01:10.268952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.576 [2024-06-10 10:01:10.268967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:16.576 [2024-06-10 10:01:10.268984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:23:16.576 [2024-06-10 10:01:10.268997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.576 [2024-06-10 10:01:10.270076] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:16.576 [2024-06-10 10:01:10.285647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.576 [2024-06-10 10:01:10.285705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:16.576 [2024-06-10 10:01:10.285738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.572 ms 00:23:16.576 [2024-06-10 10:01:10.285751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.576 [2024-06-10 10:01:10.285838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.576 [2024-06-10 10:01:10.285858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:16.576 [2024-06-10 10:01:10.285871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:16.576 [2024-06-10 10:01:10.285887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.576 [2024-06-10 10:01:10.290188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.576 [2024-06-10 10:01:10.290232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:16.576 [2024-06-10 10:01:10.290264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.179 ms 00:23:16.576 [2024-06-10 10:01:10.290276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.576 [2024-06-10 10:01:10.290388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.576 [2024-06-10 10:01:10.290408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:16.576 [2024-06-10 10:01:10.290440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:16.576 [2024-06-10 10:01:10.290452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.576 [2024-06-10 10:01:10.290532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.576 [2024-06-10 10:01:10.290549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:16.576 [2024-06-10 10:01:10.290562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:16.576 [2024-06-10 10:01:10.290574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.576 [2024-06-10 10:01:10.290613] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:16.576 [2024-06-10 10:01:10.294645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.576 [2024-06-10 10:01:10.294694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:16.576 [2024-06-10 10:01:10.294725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.047 ms 00:23:16.576 [2024-06-10 10:01:10.294753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.576 [2024-06-10 10:01:10.294810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.576 [2024-06-10 10:01:10.294827] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:16.576 [2024-06-10 10:01:10.294844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:16.576 [2024-06-10 10:01:10.294856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.576 [2024-06-10 10:01:10.294900] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:16.576 [2024-06-10 10:01:10.294930] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:16.576 [2024-06-10 10:01:10.294972] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:16.576 [2024-06-10 10:01:10.294993] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:16.576 [2024-06-10 10:01:10.295076] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:16.576 [2024-06-10 10:01:10.295096] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:16.576 [2024-06-10 10:01:10.295111] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:16.576 [2024-06-10 10:01:10.295127] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:16.576 [2024-06-10 10:01:10.295141] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:16.576 [2024-06-10 10:01:10.295169] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:16.576 [2024-06-10 10:01:10.295183] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:16.576 [2024-06-10 10:01:10.295194] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:16.576 [2024-06-10 10:01:10.295206] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:16.576 [2024-06-10 10:01:10.295218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.576 [2024-06-10 10:01:10.295230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:16.576 [2024-06-10 10:01:10.295247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:23:16.576 [2024-06-10 10:01:10.295259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.576 [2024-06-10 10:01:10.295338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.576 [2024-06-10 10:01:10.295355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:16.576 [2024-06-10 10:01:10.295367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:16.576 [2024-06-10 10:01:10.295379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.576 [2024-06-10 10:01:10.295475] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:16.576 [2024-06-10 10:01:10.295494] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:16.576 [2024-06-10 10:01:10.295507] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:16.576 [2024-06-10 10:01:10.295519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.576 [2024-06-10 10:01:10.295537] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] 
Region l2p 00:23:16.576 [2024-06-10 10:01:10.295548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:16.576 [2024-06-10 10:01:10.295560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:16.577 [2024-06-10 10:01:10.295571] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:16.577 [2024-06-10 10:01:10.295582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:16.577 [2024-06-10 10:01:10.295593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:16.577 [2024-06-10 10:01:10.295604] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:16.577 [2024-06-10 10:01:10.295614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:16.577 [2024-06-10 10:01:10.295625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:16.577 [2024-06-10 10:01:10.295638] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:16.577 [2024-06-10 10:01:10.295649] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:16.577 [2024-06-10 10:01:10.295660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.577 [2024-06-10 10:01:10.295671] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:16.577 [2024-06-10 10:01:10.295694] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:16.577 [2024-06-10 10:01:10.295706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.577 [2024-06-10 10:01:10.295717] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:16.577 [2024-06-10 10:01:10.295728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:16.577 [2024-06-10 10:01:10.295739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:16.577 [2024-06-10 10:01:10.295750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:16.577 [2024-06-10 10:01:10.295760] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:16.577 [2024-06-10 10:01:10.295771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:16.577 [2024-06-10 10:01:10.295782] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:16.577 [2024-06-10 10:01:10.295792] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:16.577 [2024-06-10 10:01:10.295803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:16.577 [2024-06-10 10:01:10.295814] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:16.577 [2024-06-10 10:01:10.295825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:16.577 [2024-06-10 10:01:10.295836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:16.577 [2024-06-10 10:01:10.295847] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:16.577 [2024-06-10 10:01:10.295857] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:16.577 [2024-06-10 10:01:10.295868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:16.577 [2024-06-10 10:01:10.295879] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:16.577 [2024-06-10 10:01:10.295890] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:16.577 [2024-06-10 10:01:10.295900] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:16.577 [2024-06-10 10:01:10.295911] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:16.577 [2024-06-10 10:01:10.295922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:16.577 [2024-06-10 10:01:10.295933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:16.577 [2024-06-10 10:01:10.295943] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:16.577 [2024-06-10 10:01:10.295955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:16.577 [2024-06-10 10:01:10.295966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:16.577 [2024-06-10 10:01:10.295978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.577 [2024-06-10 10:01:10.295990] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:16.577 [2024-06-10 10:01:10.296002] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:16.577 [2024-06-10 10:01:10.296013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:16.577 [2024-06-10 10:01:10.296025] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:16.577 [2024-06-10 10:01:10.296035] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:16.577 [2024-06-10 10:01:10.296047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:16.577 [2024-06-10 10:01:10.296059] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:16.577 [2024-06-10 10:01:10.296073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:16.577 [2024-06-10 10:01:10.296086] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:16.577 [2024-06-10 10:01:10.296098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:16.577 [2024-06-10 10:01:10.296123] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:16.577 [2024-06-10 10:01:10.296137] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:16.577 [2024-06-10 10:01:10.296148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:16.577 [2024-06-10 10:01:10.296160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:16.577 [2024-06-10 10:01:10.296171] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:16.577 [2024-06-10 10:01:10.296183] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:16.577 [2024-06-10 10:01:10.296194] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:16.577 [2024-06-10 10:01:10.296206] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 
blk_sz:0x20 00:23:16.577 [2024-06-10 10:01:10.296217] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:16.577 [2024-06-10 10:01:10.296228] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:16.577 [2024-06-10 10:01:10.296240] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:16.577 [2024-06-10 10:01:10.296252] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:16.577 [2024-06-10 10:01:10.296265] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:16.577 [2024-06-10 10:01:10.296278] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:16.577 [2024-06-10 10:01:10.296290] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:16.577 [2024-06-10 10:01:10.296301] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:16.577 [2024-06-10 10:01:10.296313] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:16.577 [2024-06-10 10:01:10.296326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.577 [2024-06-10 10:01:10.296338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:16.577 [2024-06-10 10:01:10.296356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:23:16.577 [2024-06-10 10:01:10.296368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.577 [2024-06-10 10:01:10.313702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.577 [2024-06-10 10:01:10.313768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:16.577 [2024-06-10 10:01:10.313804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.258 ms 00:23:16.577 [2024-06-10 10:01:10.313816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.577 [2024-06-10 10:01:10.313918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.577 [2024-06-10 10:01:10.313934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:16.577 [2024-06-10 10:01:10.313946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:16.577 [2024-06-10 10:01:10.313957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.363831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.363907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:16.834 [2024-06-10 10:01:10.363929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.786 ms 00:23:16.834 [2024-06-10 10:01:10.363943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.364020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.364038] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:16.834 [2024-06-10 10:01:10.364051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:16.834 [2024-06-10 10:01:10.364063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.364462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.364493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:16.834 [2024-06-10 10:01:10.364509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:23:16.834 [2024-06-10 10:01:10.364521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.364668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.364687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:16.834 [2024-06-10 10:01:10.364700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:23:16.834 [2024-06-10 10:01:10.364712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.381618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.381687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:16.834 [2024-06-10 10:01:10.381707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.878 ms 00:23:16.834 [2024-06-10 10:01:10.381719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.398043] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:16.834 [2024-06-10 10:01:10.398091] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:16.834 [2024-06-10 10:01:10.398126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.398141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:16.834 [2024-06-10 10:01:10.398156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.256 ms 00:23:16.834 [2024-06-10 10:01:10.398168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.427684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.427734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:16.834 [2024-06-10 10:01:10.427752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.469 ms 00:23:16.834 [2024-06-10 10:01:10.427771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.442990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.443069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:16.834 [2024-06-10 10:01:10.443105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.164 ms 00:23:16.834 [2024-06-10 10:01:10.443116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.459247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.459315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
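The layout dump above is internally consistent: 20971520 L2P entries at a 4-byte address size come to exactly 80 MiB, matching the "Region l2p ... blocks: 80.00 MiB" line, and the superblock entry that apparently describes the same region (type:0x2 blk_offs:0x20 blk_sz:0x5000) then implies a 4 KiB FTL block, since 0x5000 is 20480 blocks and 80 MiB / 20480 = 4096 bytes. A sketch of that cross-check (plain arithmetic on values taken from the dump, not SPDK code):

    # Cross-checking the FTL layout dump.
    entries, addr_size = 20971520, 4      # "L2P entries" / "L2P address size"
    l2p_bytes = entries * addr_size
    print(l2p_bytes / (1024 * 1024))      # 80.0 -> "Region l2p ... 80.00 MiB"

    blk_sz = 0x5000                       # l2p region size in blocks (SB dump)
    print(l2p_bytes // blk_sz)            # 4096 -> implied FTL block size, bytes
    print(0x20 * 4096 / (1024 * 1024))    # 0.125 -> "offset: 0.12 MiB"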
00:23:16.834 [2024-06-10 10:01:10.459347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.053 ms 00:23:16.834 [2024-06-10 10:01:10.459360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.459845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.459883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:16.834 [2024-06-10 10:01:10.459899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:23:16.834 [2024-06-10 10:01:10.459910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.542747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.542833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:16.834 [2024-06-10 10:01:10.542871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.810 ms 00:23:16.834 [2024-06-10 10:01:10.542884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.555174] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:16.834 [2024-06-10 10:01:10.557835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.557872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:16.834 [2024-06-10 10:01:10.557889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.875 ms 00:23:16.834 [2024-06-10 10:01:10.557902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.558008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.558027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:16.834 [2024-06-10 10:01:10.558042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:16.834 [2024-06-10 10:01:10.558054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.558151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.558171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:16.834 [2024-06-10 10:01:10.558190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:16.834 [2024-06-10 10:01:10.558202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.560094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.560141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:16.834 [2024-06-10 10:01:10.560157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.862 ms 00:23:16.834 [2024-06-10 10:01:10.560169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.560214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.560229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:16.834 [2024-06-10 10:01:10.560242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:16.834 [2024-06-10 10:01:10.560258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 
10:01:10.560302] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:16.834 [2024-06-10 10:01:10.560319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.560330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:16.834 [2024-06-10 10:01:10.560343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:16.834 [2024-06-10 10:01:10.560354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.591643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.591703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:16.834 [2024-06-10 10:01:10.591729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.263 ms 00:23:16.834 [2024-06-10 10:01:10.591741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.591824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.834 [2024-06-10 10:01:10.591843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:16.834 [2024-06-10 10:01:10.591856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:16.834 [2024-06-10 10:01:10.591868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.834 [2024-06-10 10:01:10.593011] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 324.808 ms, result 0 00:24:00.547  Copying: 24/1024 [MB] (24 MBps) Copying: 49/1024 [MB] (24 MBps) Copying: 74/1024 [MB] (25 MBps) Copying: 99/1024 [MB] (24 MBps) Copying: 124/1024 [MB] (24 MBps) Copying: 147/1024 [MB] (23 MBps) Copying: 171/1024 [MB] (23 MBps) Copying: 195/1024 [MB] (24 MBps) Copying: 220/1024 [MB] (24 MBps) Copying: 243/1024 [MB] (23 MBps) Copying: 266/1024 [MB] (22 MBps) Copying: 290/1024 [MB] (23 MBps) Copying: 312/1024 [MB] (22 MBps) Copying: 336/1024 [MB] (23 MBps) Copying: 360/1024 [MB] (24 MBps) Copying: 384/1024 [MB] (23 MBps) Copying: 407/1024 [MB] (23 MBps) Copying: 432/1024 [MB] (24 MBps) Copying: 457/1024 [MB] (24 MBps) Copying: 480/1024 [MB] (23 MBps) Copying: 504/1024 [MB] (23 MBps) Copying: 528/1024 [MB] (24 MBps) Copying: 552/1024 [MB] (23 MBps) Copying: 576/1024 [MB] (23 MBps) Copying: 600/1024 [MB] (24 MBps) Copying: 625/1024 [MB] (24 MBps) Copying: 649/1024 [MB] (24 MBps) Copying: 673/1024 [MB] (24 MBps) Copying: 697/1024 [MB] (23 MBps) Copying: 721/1024 [MB] (24 MBps) Copying: 745/1024 [MB] (23 MBps) Copying: 770/1024 [MB] (24 MBps) Copying: 793/1024 [MB] (23 MBps) Copying: 818/1024 [MB] (24 MBps) Copying: 841/1024 [MB] (23 MBps) Copying: 866/1024 [MB] (24 MBps) Copying: 890/1024 [MB] (24 MBps) Copying: 914/1024 [MB] (23 MBps) Copying: 938/1024 [MB] (24 MBps) Copying: 962/1024 [MB] (24 MBps) Copying: 986/1024 [MB] (23 MBps) Copying: 1009/1024 [MB] (23 MBps) Copying: 1023/1024 [MB] (13 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-06-10 10:01:54.172924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.547 [2024-06-10 10:01:54.172998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:00.547 [2024-06-10 10:01:54.173021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:00.547 [2024-06-10 10:01:54.173035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
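The copy above pushes the same 1 GiB through the ftl0 bdev at an average of 23 MBps, roughly an eighth of the 186 MBps the earlier urandom-to-plain-file copy achieved, and the rate agrees with the management traces: "FTL startup" finished at 10:01:10.593 and the first shutdown action ("Deinit core IO channel") is stamped 10:01:54.172, about 44 s later. A quick consistency check:

    # Wall-clock sanity check for the 1024 MiB copy through ftl0 above.
    print(1024 / 23)  # ~44.5 s expected at the reported average rate

    from datetime import datetime
    t0 = datetime.fromisoformat("2024-06-10 10:01:10.593011")  # startup done
    t1 = datetime.fromisoformat("2024-06-10 10:01:54.172924")  # first deinit
    print((t1 - t0).total_seconds())  # ~43.6 s observed in the trace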
00:24:00.547 [2024-06-10 10:01:54.176283] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:00.547 [2024-06-10 10:01:54.180768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.547 [2024-06-10 10:01:54.180844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:00.547 [2024-06-10 10:01:54.180865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.416 ms 00:24:00.547 [2024-06-10 10:01:54.180892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.547 [2024-06-10 10:01:54.195129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.547 [2024-06-10 10:01:54.195219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:00.547 [2024-06-10 10:01:54.195255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.028 ms 00:24:00.547 [2024-06-10 10:01:54.195268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.547 [2024-06-10 10:01:54.216450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.547 [2024-06-10 10:01:54.216506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:00.548 [2024-06-10 10:01:54.216526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.155 ms 00:24:00.548 [2024-06-10 10:01:54.216539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.548 [2024-06-10 10:01:54.223431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.548 [2024-06-10 10:01:54.223495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:00.548 [2024-06-10 10:01:54.223511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.843 ms 00:24:00.548 [2024-06-10 10:01:54.223523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.548 [2024-06-10 10:01:54.255165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.548 [2024-06-10 10:01:54.255245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:00.548 [2024-06-10 10:01:54.255280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.559 ms 00:24:00.548 [2024-06-10 10:01:54.255292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.548 [2024-06-10 10:01:54.273162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.548 [2024-06-10 10:01:54.273225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:00.548 [2024-06-10 10:01:54.273260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.823 ms 00:24:00.548 [2024-06-10 10:01:54.273272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.807 [2024-06-10 10:01:54.363246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.807 [2024-06-10 10:01:54.363350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:00.807 [2024-06-10 10:01:54.363418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.902 ms 00:24:00.807 [2024-06-10 10:01:54.363459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.807 [2024-06-10 10:01:54.394683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.807 [2024-06-10 10:01:54.394776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:00.807 
[2024-06-10 10:01:54.394795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.191 ms 00:24:00.807 [2024-06-10 10:01:54.394808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.807 [2024-06-10 10:01:54.424969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.807 [2024-06-10 10:01:54.425018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:00.807 [2024-06-10 10:01:54.425037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.114 ms 00:24:00.807 [2024-06-10 10:01:54.425066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.808 [2024-06-10 10:01:54.457046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.808 [2024-06-10 10:01:54.457096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:00.808 [2024-06-10 10:01:54.457127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.913 ms 00:24:00.808 [2024-06-10 10:01:54.457139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.808 [2024-06-10 10:01:54.488578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.808 [2024-06-10 10:01:54.488638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:00.808 [2024-06-10 10:01:54.488672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.327 ms 00:24:00.808 [2024-06-10 10:01:54.488684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.808 [2024-06-10 10:01:54.488728] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:00.808 [2024-06-10 10:01:54.488753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103168 / 261120 wr_cnt: 1 state: open 00:24:00.808 [2024-06-10 10:01:54.488768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 
[2024-06-10 10:01:54.488927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.488988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 
state: free 00:24:00.808 [2024-06-10 10:01:54.489245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 
0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:00.808 [2024-06-10 10:01:54.489764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.489998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:00.809 [2024-06-10 10:01:54.490019] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:00.809 [2024-06-10 10:01:54.490031] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76e15599-d612-4b12-a5b7-08522e300726 00:24:00.809 [2024-06-10 10:01:54.490048] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103168 00:24:00.809 [2024-06-10 10:01:54.490060] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104128 00:24:00.809 [2024-06-10 10:01:54.490071] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103168 00:24:00.809 [2024-06-10 10:01:54.490085] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:24:00.809 [2024-06-10 10:01:54.490096] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:00.809 [2024-06-10 10:01:54.490119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:00.809 [2024-06-10 10:01:54.490132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:00.809 [2024-06-10 10:01:54.490143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:00.809 [2024-06-10 10:01:54.490167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:00.809 [2024-06-10 10:01:54.490180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.809 [2024-06-10 10:01:54.490192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:00.809 [2024-06-10 10:01:54.490205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:24:00.809 [2024-06-10 10:01:54.490217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.809 [2024-06-10 10:01:54.506786] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.809 [2024-06-10 10:01:54.506838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:00.809 [2024-06-10 10:01:54.506871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.511 ms 00:24:00.809 [2024-06-10 10:01:54.506883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.809 [2024-06-10 10:01:54.507131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.809 [2024-06-10 10:01:54.507179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:00.809 [2024-06-10 10:01:54.507194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:24:00.809 [2024-06-10 10:01:54.507213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.809 [2024-06-10 10:01:54.552685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.809 [2024-06-10 10:01:54.552772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:00.809 [2024-06-10 10:01:54.552806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.809 [2024-06-10 10:01:54.552819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.809 [2024-06-10 10:01:54.552910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.809 [2024-06-10 10:01:54.552927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:00.809 [2024-06-10 10:01:54.552940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.809 [2024-06-10 10:01:54.552959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.809 [2024-06-10 10:01:54.553080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.809 [2024-06-10 10:01:54.553101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:00.809 [2024-06-10 10:01:54.553115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.809 [2024-06-10 10:01:54.553127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.809 [2024-06-10 10:01:54.553168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.809 [2024-06-10 10:01:54.553185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:00.809 [2024-06-10 10:01:54.553197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.809 [2024-06-10 10:01:54.553209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.068 [2024-06-10 10:01:54.651139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.068 [2024-06-10 10:01:54.651213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:01.068 [2024-06-10 10:01:54.651234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.068 [2024-06-10 10:01:54.651246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.068 [2024-06-10 10:01:54.689802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.068 [2024-06-10 10:01:54.689860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:01.068 [2024-06-10 10:01:54.689880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.068 [2024-06-10 10:01:54.689901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:01.068 [2024-06-10 10:01:54.690000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.068 [2024-06-10 10:01:54.690018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:01.068 [2024-06-10 10:01:54.690031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.068 [2024-06-10 10:01:54.690058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.068 [2024-06-10 10:01:54.690157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.068 [2024-06-10 10:01:54.690199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:01.068 [2024-06-10 10:01:54.690213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.068 [2024-06-10 10:01:54.690226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.068 [2024-06-10 10:01:54.690418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.068 [2024-06-10 10:01:54.690447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:01.068 [2024-06-10 10:01:54.690462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.068 [2024-06-10 10:01:54.690474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.068 [2024-06-10 10:01:54.690526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.068 [2024-06-10 10:01:54.690545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:01.068 [2024-06-10 10:01:54.690557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.068 [2024-06-10 10:01:54.690569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.068 [2024-06-10 10:01:54.690613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.068 [2024-06-10 10:01:54.690636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:01.068 [2024-06-10 10:01:54.690648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.068 [2024-06-10 10:01:54.690660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.068 [2024-06-10 10:01:54.690712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.068 [2024-06-10 10:01:54.690729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:01.068 [2024-06-10 10:01:54.690741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.068 [2024-06-10 10:01:54.690753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.068 [2024-06-10 10:01:54.690895] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 520.555 ms, result 0 00:24:02.972 00:24:02.972 00:24:02.972 10:01:56 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:04.875 10:01:58 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:04.875 [2024-06-10 10:01:58.416436] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:04.875 [2024-06-10 10:01:58.416611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77823 ] 00:24:04.875 [2024-06-10 10:01:58.589648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.134 [2024-06-10 10:01:58.810956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.393 [2024-06-10 10:01:59.088278] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:05.393 [2024-06-10 10:01:59.088389] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:05.691 [2024-06-10 10:01:59.242396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.691 [2024-06-10 10:01:59.242483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:05.691 [2024-06-10 10:01:59.242518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:05.691 [2024-06-10 10:01:59.242530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.691 [2024-06-10 10:01:59.242595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.691 [2024-06-10 10:01:59.242613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:05.691 [2024-06-10 10:01:59.242625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:05.691 [2024-06-10 10:01:59.242636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.691 [2024-06-10 10:01:59.242665] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:05.692 [2024-06-10 10:01:59.243639] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:05.692 [2024-06-10 10:01:59.243683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.692 [2024-06-10 10:01:59.243697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:05.692 [2024-06-10 10:01:59.243710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:24:05.692 [2024-06-10 10:01:59.243722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.692 [2024-06-10 10:01:59.245066] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:05.692 [2024-06-10 10:01:59.260657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.692 [2024-06-10 10:01:59.260713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:05.692 [2024-06-10 10:01:59.260767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.592 ms 00:24:05.692 [2024-06-10 10:01:59.260778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.692 [2024-06-10 10:01:59.260846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.692 [2024-06-10 10:01:59.260865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:05.692 [2024-06-10 10:01:59.260877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:05.692 [2024-06-10 10:01:59.260888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.692 [2024-06-10 10:01:59.265625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.692 [2024-06-10 
10:01:59.265680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:05.692 [2024-06-10 10:01:59.265711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.616 ms 00:24:05.692 [2024-06-10 10:01:59.265721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.692 [2024-06-10 10:01:59.265825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.692 [2024-06-10 10:01:59.265844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:05.692 [2024-06-10 10:01:59.265855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:05.692 [2024-06-10 10:01:59.265865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.692 [2024-06-10 10:01:59.265920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.692 [2024-06-10 10:01:59.265973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:05.692 [2024-06-10 10:01:59.265985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:05.692 [2024-06-10 10:01:59.265996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.692 [2024-06-10 10:01:59.266032] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:05.692 [2024-06-10 10:01:59.270581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.692 [2024-06-10 10:01:59.270633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:05.692 [2024-06-10 10:01:59.270665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.561 ms 00:24:05.692 [2024-06-10 10:01:59.270676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.692 [2024-06-10 10:01:59.270725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.692 [2024-06-10 10:01:59.270741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:05.692 [2024-06-10 10:01:59.270753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:05.692 [2024-06-10 10:01:59.270764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.692 [2024-06-10 10:01:59.270805] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:05.692 [2024-06-10 10:01:59.270838] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:05.692 [2024-06-10 10:01:59.270879] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:05.692 [2024-06-10 10:01:59.270905] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:05.692 [2024-06-10 10:01:59.270988] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:05.692 [2024-06-10 10:01:59.271004] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:05.692 [2024-06-10 10:01:59.271019] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:05.692 [2024-06-10 10:01:59.271033] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:05.692 [2024-06-10 10:01:59.271045] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:05.692 [2024-06-10 10:01:59.271062] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:05.692 [2024-06-10 10:01:59.271073] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:05.692 [2024-06-10 10:01:59.271083] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:05.692 [2024-06-10 10:01:59.271094] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:05.692 [2024-06-10 10:01:59.271105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.692 [2024-06-10 10:01:59.271131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:05.692 [2024-06-10 10:01:59.271161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:24:05.692 [2024-06-10 10:01:59.271173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.692 [2024-06-10 10:01:59.271249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.692 [2024-06-10 10:01:59.271264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:05.692 [2024-06-10 10:01:59.271279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:05.692 [2024-06-10 10:01:59.271289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.692 [2024-06-10 10:01:59.271407] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:05.692 [2024-06-10 10:01:59.271444] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:05.692 [2024-06-10 10:01:59.271459] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:05.692 [2024-06-10 10:01:59.271470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.692 [2024-06-10 10:01:59.271482] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:05.692 [2024-06-10 10:01:59.271491] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:05.692 [2024-06-10 10:01:59.271502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:05.692 [2024-06-10 10:01:59.271513] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:05.692 [2024-06-10 10:01:59.271523] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:05.692 [2024-06-10 10:01:59.271533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:05.692 [2024-06-10 10:01:59.271543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:05.692 [2024-06-10 10:01:59.271552] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:05.692 [2024-06-10 10:01:59.271562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:05.692 [2024-06-10 10:01:59.271572] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:05.692 [2024-06-10 10:01:59.271583] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:05.692 [2024-06-10 10:01:59.271594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.692 [2024-06-10 10:01:59.271603] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:05.692 [2024-06-10 10:01:59.271613] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:05.692 [2024-06-10 10:01:59.271623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:05.692 [2024-06-10 10:01:59.271633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:05.692 [2024-06-10 10:01:59.271643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:05.692 [2024-06-10 10:01:59.271667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:05.692 [2024-06-10 10:01:59.271678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:05.692 [2024-06-10 10:01:59.271688] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:05.692 [2024-06-10 10:01:59.271698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:05.692 [2024-06-10 10:01:59.271708] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:05.692 [2024-06-10 10:01:59.271717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:05.692 [2024-06-10 10:01:59.271727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:05.692 [2024-06-10 10:01:59.271737] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:05.692 [2024-06-10 10:01:59.271747] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:05.692 [2024-06-10 10:01:59.271757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:05.692 [2024-06-10 10:01:59.271767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:05.692 [2024-06-10 10:01:59.271776] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:05.692 [2024-06-10 10:01:59.271786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:05.692 [2024-06-10 10:01:59.271797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:05.692 [2024-06-10 10:01:59.271806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:05.692 [2024-06-10 10:01:59.271816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:05.692 [2024-06-10 10:01:59.271841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:05.692 [2024-06-10 10:01:59.271851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:05.692 [2024-06-10 10:01:59.271861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:05.692 [2024-06-10 10:01:59.271870] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:05.692 [2024-06-10 10:01:59.271881] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:05.692 [2024-06-10 10:01:59.271891] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:05.692 [2024-06-10 10:01:59.271906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.692 [2024-06-10 10:01:59.271918] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:05.692 [2024-06-10 10:01:59.271929] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:05.692 [2024-06-10 10:01:59.271939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:05.692 [2024-06-10 10:01:59.271949] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:05.692 [2024-06-10 10:01:59.271959] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:05.692 [2024-06-10 10:01:59.271969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:05.693 [2024-06-10 10:01:59.271980] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:05.693 [2024-06-10 10:01:59.271993] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:05.693 [2024-06-10 10:01:59.272005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:05.693 [2024-06-10 10:01:59.272015] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:05.693 [2024-06-10 10:01:59.272026] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:05.693 [2024-06-10 10:01:59.272037] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:05.693 [2024-06-10 10:01:59.272062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:05.693 [2024-06-10 10:01:59.272072] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:05.693 [2024-06-10 10:01:59.272083] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:05.693 [2024-06-10 10:01:59.272093] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:05.693 [2024-06-10 10:01:59.272103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:05.693 [2024-06-10 10:01:59.272130] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:05.693 [2024-06-10 10:01:59.272141] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:05.693 [2024-06-10 10:01:59.272168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:05.693 [2024-06-10 10:01:59.272182] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:05.693 [2024-06-10 10:01:59.272193] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:05.693 [2024-06-10 10:01:59.272205] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:05.693 [2024-06-10 10:01:59.272216] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:05.693 [2024-06-10 10:01:59.272228] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:05.693 [2024-06-10 10:01:59.272240] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:05.693 [2024-06-10 10:01:59.272251] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:24:05.693 [2024-06-10 10:01:59.272263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.693 [2024-06-10 10:01:59.272274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:05.693 [2024-06-10 10:01:59.272286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:24:05.693 [2024-06-10 10:01:59.272297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.693 [2024-06-10 10:01:59.290448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.693 [2024-06-10 10:01:59.290541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:05.693 [2024-06-10 10:01:59.290573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.092 ms 00:24:05.693 [2024-06-10 10:01:59.290584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.693 [2024-06-10 10:01:59.290676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.693 [2024-06-10 10:01:59.290697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:05.693 [2024-06-10 10:01:59.290708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:05.693 [2024-06-10 10:01:59.290718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.693 [2024-06-10 10:01:59.341149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.693 [2024-06-10 10:01:59.341220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:05.693 [2024-06-10 10:01:59.341254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.335 ms 00:24:05.693 [2024-06-10 10:01:59.341270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.693 [2024-06-10 10:01:59.341339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.693 [2024-06-10 10:01:59.341355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:05.693 [2024-06-10 10:01:59.341367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:05.693 [2024-06-10 10:01:59.341378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.693 [2024-06-10 10:01:59.341801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.693 [2024-06-10 10:01:59.341830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:05.693 [2024-06-10 10:01:59.341843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:24:05.693 [2024-06-10 10:01:59.341854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.693 [2024-06-10 10:01:59.341997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.693 [2024-06-10 10:01:59.342016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:05.693 [2024-06-10 10:01:59.342028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:24:05.693 [2024-06-10 10:01:59.342038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.693 [2024-06-10 10:01:59.357830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.693 [2024-06-10 10:01:59.357884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:05.693 [2024-06-10 10:01:59.357916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.765 ms 00:24:05.693 [2024-06-10 
10:01:59.357927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.693 [2024-06-10 10:01:59.372402] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:05.693 [2024-06-10 10:01:59.372456] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:05.693 [2024-06-10 10:01:59.372488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.693 [2024-06-10 10:01:59.372499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:05.693 [2024-06-10 10:01:59.372510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.442 ms 00:24:05.693 [2024-06-10 10:01:59.372520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.693 [2024-06-10 10:01:59.399941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.693 [2024-06-10 10:01:59.399996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:05.693 [2024-06-10 10:01:59.400028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.376 ms 00:24:05.693 [2024-06-10 10:01:59.400039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.693 [2024-06-10 10:01:59.416074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.693 [2024-06-10 10:01:59.416145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:05.693 [2024-06-10 10:01:59.416164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.004 ms 00:24:05.693 [2024-06-10 10:01:59.416175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.693 [2024-06-10 10:01:59.431245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.693 [2024-06-10 10:01:59.431301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:05.693 [2024-06-10 10:01:59.431332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.024 ms 00:24:05.693 [2024-06-10 10:01:59.431343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.693 [2024-06-10 10:01:59.431859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.693 [2024-06-10 10:01:59.431892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:05.693 [2024-06-10 10:01:59.431907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:24:05.693 [2024-06-10 10:01:59.431919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.968 [2024-06-10 10:01:59.507351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.968 [2024-06-10 10:01:59.507450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:05.968 [2024-06-10 10:01:59.507488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.401 ms 00:24:05.968 [2024-06-10 10:01:59.507500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.968 [2024-06-10 10:01:59.518757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:05.968 [2024-06-10 10:01:59.521226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.968 [2024-06-10 10:01:59.521272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:05.968 [2024-06-10 10:01:59.521303] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.659 ms 00:24:05.969 [2024-06-10 10:01:59.521313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.969 [2024-06-10 10:01:59.521409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.969 [2024-06-10 10:01:59.521430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:05.969 [2024-06-10 10:01:59.521442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:05.969 [2024-06-10 10:01:59.521452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.969 [2024-06-10 10:01:59.522659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.969 [2024-06-10 10:01:59.522710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:05.969 [2024-06-10 10:01:59.522740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:24:05.969 [2024-06-10 10:01:59.522750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.969 [2024-06-10 10:01:59.524681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.969 [2024-06-10 10:01:59.524747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:05.969 [2024-06-10 10:01:59.524786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.899 ms 00:24:05.969 [2024-06-10 10:01:59.524796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.969 [2024-06-10 10:01:59.524833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.969 [2024-06-10 10:01:59.524848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:05.969 [2024-06-10 10:01:59.524859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:05.969 [2024-06-10 10:01:59.524877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.969 [2024-06-10 10:01:59.524918] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:05.969 [2024-06-10 10:01:59.524935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.969 [2024-06-10 10:01:59.524945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:05.969 [2024-06-10 10:01:59.524957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:05.969 [2024-06-10 10:01:59.524972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.969 [2024-06-10 10:01:59.557986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.969 [2024-06-10 10:01:59.558054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:05.969 [2024-06-10 10:01:59.558074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.972 ms 00:24:05.969 [2024-06-10 10:01:59.558086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.969 [2024-06-10 10:01:59.558185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.969 [2024-06-10 10:01:59.558241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:05.969 [2024-06-10 10:01:59.558253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:05.969 [2024-06-10 10:01:59.558264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.969 [2024-06-10 10:01:59.567007] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 322.312 ms, result 0 00:24:44.813  Copying: 824/1048576 [kB] (824 kBps) Copying: 3288/1048576 [kB] (2464 kBps) Copying: 16/1024 [MB] (13 MBps) Copying: 44/1024 [MB] (28 MBps) Copying: 73/1024 [MB] (28 MBps) Copying: 101/1024 [MB] (28 MBps) Copying: 129/1024 [MB] (28 MBps) Copying: 157/1024 [MB] (27 MBps) Copying: 185/1024 [MB] (27 MBps) Copying: 213/1024 [MB] (28 MBps) Copying: 241/1024 [MB] (28 MBps) Copying: 269/1024 [MB] (27 MBps) Copying: 298/1024 [MB] (28 MBps) Copying: 327/1024 [MB] (28 MBps) Copying: 356/1024 [MB] (29 MBps) Copying: 386/1024 [MB] (29 MBps) Copying: 414/1024 [MB] (28 MBps) Copying: 443/1024 [MB] (28 MBps) Copying: 473/1024 [MB] (29 MBps) Copying: 502/1024 [MB] (29 MBps) Copying: 531/1024 [MB] (29 MBps) Copying: 560/1024 [MB] (29 MBps) Copying: 590/1024 [MB] (29 MBps) Copying: 619/1024 [MB] (29 MBps) Copying: 648/1024 [MB] (28 MBps) Copying: 677/1024 [MB] (29 MBps) Copying: 707/1024 [MB] (29 MBps) Copying: 735/1024 [MB] (28 MBps) Copying: 763/1024 [MB] (28 MBps) Copying: 792/1024 [MB] (29 MBps) Copying: 821/1024 [MB] (28 MBps) Copying: 850/1024 [MB] (28 MBps) Copying: 879/1024 [MB] (29 MBps) Copying: 908/1024 [MB] (29 MBps) Copying: 937/1024 [MB] (28 MBps) Copying: 967/1024 [MB] (29 MBps) Copying: 996/1024 [MB] (29 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-06-10 10:02:38.299974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-06-10 10:02:38.300080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:44.813 [2024-06-10 10:02:38.300187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:44.813 [2024-06-10 10:02:38.300210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-06-10 10:02:38.300264] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:44.813 [2024-06-10 10:02:38.307467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-06-10 10:02:38.307537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:44.813 [2024-06-10 10:02:38.307578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.164 ms 00:24:44.813 [2024-06-10 10:02:38.307602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-06-10 10:02:38.308160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-06-10 10:02:38.308220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:44.813 [2024-06-10 10:02:38.308258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:24:44.813 [2024-06-10 10:02:38.308279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-06-10 10:02:38.323646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-06-10 10:02:38.323698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:44.813 [2024-06-10 10:02:38.323718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.326 ms 00:24:44.813 [2024-06-10 10:02:38.323733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-06-10 10:02:38.332078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-06-10 10:02:38.332130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:44.813 [2024-06-10 
10:02:38.332149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.282 ms 00:24:44.813 [2024-06-10 10:02:38.332172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-06-10 10:02:38.370139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-06-10 10:02:38.370200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:44.813 [2024-06-10 10:02:38.370221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.904 ms 00:24:44.813 [2024-06-10 10:02:38.370235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-06-10 10:02:38.391203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-06-10 10:02:38.391261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:44.813 [2024-06-10 10:02:38.391291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.916 ms 00:24:44.813 [2024-06-10 10:02:38.391305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-06-10 10:02:38.394464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-06-10 10:02:38.394515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:44.813 [2024-06-10 10:02:38.394534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.103 ms 00:24:44.813 [2024-06-10 10:02:38.394548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-06-10 10:02:38.432780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-06-10 10:02:38.432840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:44.813 [2024-06-10 10:02:38.432871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.197 ms 00:24:44.813 [2024-06-10 10:02:38.432895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-06-10 10:02:38.470597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-06-10 10:02:38.470657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:44.814 [2024-06-10 10:02:38.470682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.649 ms 00:24:44.814 [2024-06-10 10:02:38.470694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.814 [2024-06-10 10:02:38.508047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.814 [2024-06-10 10:02:38.508130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:44.814 [2024-06-10 10:02:38.508152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.293 ms 00:24:44.814 [2024-06-10 10:02:38.508165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.814 [2024-06-10 10:02:38.545460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.814 [2024-06-10 10:02:38.545509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:44.814 [2024-06-10 10:02:38.545537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.166 ms 00:24:44.814 [2024-06-10 10:02:38.545550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.814 [2024-06-10 10:02:38.545599] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:44.814 [2024-06-10 10:02:38.545626] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:44.814 [2024-06-10 10:02:38.545647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:24:44.814 [2024-06-10 10:02:38.545662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.545986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 
10:02:38.546000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:24:44.814 [2024-06-10 10:02:38.546364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.546993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.547008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.547021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.547035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:44.814 [2024-06-10 10:02:38.547059] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:44.814 [2024-06-10 10:02:38.547072] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76e15599-d612-4b12-a5b7-08522e300726 00:24:44.814 [2024-06-10 10:02:38.547086] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:24:44.814 [2024-06-10 10:02:38.547099] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 163264 00:24:44.814 [2024-06-10 10:02:38.547123] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 161280 00:24:44.814 [2024-06-10 10:02:38.547146] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0123 00:24:44.814 [2024-06-10 10:02:38.547159] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:44.814 [2024-06-10 10:02:38.547172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:44.814 [2024-06-10 10:02:38.547185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:44.814 [2024-06-10 10:02:38.547198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:44.814 [2024-06-10 10:02:38.547209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:44.814 [2024-06-10 10:02:38.547223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.814 [2024-06-10 10:02:38.547237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:44.814 [2024-06-10 10:02:38.547250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:24:44.814 [2024-06-10 10:02:38.547263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.814 [2024-06-10 10:02:38.567309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.814 [2024-06-10 10:02:38.567354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:44.814 [2024-06-10 10:02:38.567392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.968 ms 00:24:44.814 [2024-06-10 10:02:38.567406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.814 [2024-06-10 10:02:38.567701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.814 [2024-06-10 10:02:38.567737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:44.814 [2024-06-10 10:02:38.567754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:24:44.814 [2024-06-10 10:02:38.567768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.072 [2024-06-10 10:02:38.623434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.072 [2024-06-10 10:02:38.623509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:45.072 [2024-06-10 10:02:38.623528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.072 [2024-06-10 10:02:38.623542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.072 [2024-06-10 10:02:38.623614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.072 [2024-06-10 10:02:38.623631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:45.072 [2024-06-10 10:02:38.623646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.072 [2024-06-10 10:02:38.623658] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:45.072 [2024-06-10 10:02:38.623759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.072 [2024-06-10 10:02:38.623789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:45.072 [2024-06-10 10:02:38.623804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.072 [2024-06-10 10:02:38.623817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.072 [2024-06-10 10:02:38.623844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.072 [2024-06-10 10:02:38.623859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:45.072 [2024-06-10 10:02:38.623872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.072 [2024-06-10 10:02:38.623885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.072 [2024-06-10 10:02:38.727793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.072 [2024-06-10 10:02:38.727890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:45.072 [2024-06-10 10:02:38.727924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.072 [2024-06-10 10:02:38.727934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.072 [2024-06-10 10:02:38.764723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.073 [2024-06-10 10:02:38.764776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:45.073 [2024-06-10 10:02:38.764808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.073 [2024-06-10 10:02:38.764818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.073 [2024-06-10 10:02:38.764891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.073 [2024-06-10 10:02:38.764908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:45.073 [2024-06-10 10:02:38.764927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.073 [2024-06-10 10:02:38.764937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.073 [2024-06-10 10:02:38.764987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.073 [2024-06-10 10:02:38.765018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:45.073 [2024-06-10 10:02:38.765045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.073 [2024-06-10 10:02:38.765055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.073 [2024-06-10 10:02:38.765202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.073 [2024-06-10 10:02:38.765223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:45.073 [2024-06-10 10:02:38.765244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.073 [2024-06-10 10:02:38.765264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.073 [2024-06-10 10:02:38.765317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.073 [2024-06-10 10:02:38.765335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:45.073 [2024-06-10 10:02:38.765347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:45.073 [2024-06-10 10:02:38.765358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.073 [2024-06-10 10:02:38.765400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.073 [2024-06-10 10:02:38.765415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:45.073 [2024-06-10 10:02:38.765427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.073 [2024-06-10 10:02:38.765444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.073 [2024-06-10 10:02:38.765497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.073 [2024-06-10 10:02:38.765514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:45.073 [2024-06-10 10:02:38.765525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.073 [2024-06-10 10:02:38.765536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.073 [2024-06-10 10:02:38.765706] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 465.735 ms, result 0 00:24:46.448 00:24:46.448 00:24:46.448 10:02:39 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:48.346 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:48.346 10:02:41 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:48.346 [2024-06-10 10:02:41.946289] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:48.346 [2024-06-10 10:02:41.946456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78250 ] 00:24:48.603 [2024-06-10 10:02:42.116864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.603 [2024-06-10 10:02:42.307258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.860 [2024-06-10 10:02:42.598502] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:48.860 [2024-06-10 10:02:42.598616] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:49.120 [2024-06-10 10:02:42.752653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.120 [2024-06-10 10:02:42.752730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:49.120 [2024-06-10 10:02:42.752752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:49.120 [2024-06-10 10:02:42.752766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.120 [2024-06-10 10:02:42.752838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.120 [2024-06-10 10:02:42.752859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.120 [2024-06-10 10:02:42.752873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:49.120 [2024-06-10 10:02:42.752885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.120 [2024-06-10 10:02:42.752919] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write 
buffer cache 00:24:49.120 [2024-06-10 10:02:42.753849] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:49.120 [2024-06-10 10:02:42.753895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.120 [2024-06-10 10:02:42.753910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.120 [2024-06-10 10:02:42.753924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:24:49.120 [2024-06-10 10:02:42.753937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.120 [2024-06-10 10:02:42.755195] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:49.120 [2024-06-10 10:02:42.771127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.120 [2024-06-10 10:02:42.771183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:49.120 [2024-06-10 10:02:42.771222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.934 ms 00:24:49.120 [2024-06-10 10:02:42.771235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.120 [2024-06-10 10:02:42.771310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.120 [2024-06-10 10:02:42.771332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:49.120 [2024-06-10 10:02:42.771345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:49.120 [2024-06-10 10:02:42.771356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.120 [2024-06-10 10:02:42.775980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.120 [2024-06-10 10:02:42.776024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.120 [2024-06-10 10:02:42.776041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.492 ms 00:24:49.120 [2024-06-10 10:02:42.776052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.120 [2024-06-10 10:02:42.776173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.120 [2024-06-10 10:02:42.776227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.120 [2024-06-10 10:02:42.776241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:24:49.120 [2024-06-10 10:02:42.776253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.120 [2024-06-10 10:02:42.776314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.120 [2024-06-10 10:02:42.776338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:49.120 [2024-06-10 10:02:42.776352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:49.120 [2024-06-10 10:02:42.776364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.120 [2024-06-10 10:02:42.776405] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:49.120 [2024-06-10 10:02:42.780555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.120 [2024-06-10 10:02:42.780606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.120 [2024-06-10 10:02:42.780638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.165 ms 00:24:49.120 [2024-06-10 10:02:42.780650] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.120 [2024-06-10 10:02:42.780692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.120 [2024-06-10 10:02:42.780708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:49.120 [2024-06-10 10:02:42.780720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:49.120 [2024-06-10 10:02:42.780731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.120 [2024-06-10 10:02:42.780773] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:49.120 [2024-06-10 10:02:42.780807] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:49.120 [2024-06-10 10:02:42.780862] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:49.120 [2024-06-10 10:02:42.780899] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:49.120 [2024-06-10 10:02:42.780981] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:49.120 [2024-06-10 10:02:42.780998] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:49.120 [2024-06-10 10:02:42.781013] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:49.120 [2024-06-10 10:02:42.781029] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:49.120 [2024-06-10 10:02:42.781042] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:49.120 [2024-06-10 10:02:42.781060] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:49.120 [2024-06-10 10:02:42.781072] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:49.121 [2024-06-10 10:02:42.781084] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:49.121 [2024-06-10 10:02:42.781096] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:49.121 [2024-06-10 10:02:42.781108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.121 [2024-06-10 10:02:42.781120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:49.121 [2024-06-10 10:02:42.781133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:24:49.121 [2024-06-10 10:02:42.781145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.121 [2024-06-10 10:02:42.781236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.121 [2024-06-10 10:02:42.781253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:49.121 [2024-06-10 10:02:42.781269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:49.121 [2024-06-10 10:02:42.781281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.121 [2024-06-10 10:02:42.781392] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:49.121 [2024-06-10 10:02:42.781422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:49.121 [2024-06-10 10:02:42.781437] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.121 
[2024-06-10 10:02:42.781450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.121 [2024-06-10 10:02:42.781462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:49.121 [2024-06-10 10:02:42.781473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:49.121 [2024-06-10 10:02:42.781484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:49.121 [2024-06-10 10:02:42.781495] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:49.121 [2024-06-10 10:02:42.781506] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:49.121 [2024-06-10 10:02:42.781517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.121 [2024-06-10 10:02:42.781528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:49.121 [2024-06-10 10:02:42.781539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:49.121 [2024-06-10 10:02:42.781549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.121 [2024-06-10 10:02:42.781560] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:49.121 [2024-06-10 10:02:42.781572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:49.121 [2024-06-10 10:02:42.781583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.121 [2024-06-10 10:02:42.781594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:49.121 [2024-06-10 10:02:42.781605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:49.121 [2024-06-10 10:02:42.781615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.121 [2024-06-10 10:02:42.781626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:49.121 [2024-06-10 10:02:42.781637] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:49.121 [2024-06-10 10:02:42.781663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:49.121 [2024-06-10 10:02:42.781675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:49.121 [2024-06-10 10:02:42.781687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:49.121 [2024-06-10 10:02:42.781697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:49.121 [2024-06-10 10:02:42.781708] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:49.121 [2024-06-10 10:02:42.781719] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:49.121 [2024-06-10 10:02:42.781730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:49.121 [2024-06-10 10:02:42.781741] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:49.121 [2024-06-10 10:02:42.781751] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:49.121 [2024-06-10 10:02:42.781762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:49.121 [2024-06-10 10:02:42.781773] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:49.121 [2024-06-10 10:02:42.781784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:49.121 [2024-06-10 10:02:42.781795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:49.121 [2024-06-10 10:02:42.781805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:24:49.121 [2024-06-10 10:02:42.781816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:49.121 [2024-06-10 10:02:42.781827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.121 [2024-06-10 10:02:42.781838] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:49.121 [2024-06-10 10:02:42.781849] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:49.121 [2024-06-10 10:02:42.781859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.121 [2024-06-10 10:02:42.781870] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:49.121 [2024-06-10 10:02:42.781881] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:49.121 [2024-06-10 10:02:42.781893] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.121 [2024-06-10 10:02:42.781910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.121 [2024-06-10 10:02:42.781922] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:49.121 [2024-06-10 10:02:42.781934] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:49.121 [2024-06-10 10:02:42.781946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:49.121 [2024-06-10 10:02:42.781957] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:49.121 [2024-06-10 10:02:42.781968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:49.121 [2024-06-10 10:02:42.781979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:49.121 [2024-06-10 10:02:42.781992] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:49.121 [2024-06-10 10:02:42.782006] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.121 [2024-06-10 10:02:42.782019] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:49.121 [2024-06-10 10:02:42.782031] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:49.121 [2024-06-10 10:02:42.782043] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:49.121 [2024-06-10 10:02:42.782055] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:49.121 [2024-06-10 10:02:42.782067] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:49.121 [2024-06-10 10:02:42.782078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:49.121 [2024-06-10 10:02:42.782090] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:49.121 [2024-06-10 10:02:42.782102] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:49.121 [2024-06-10 10:02:42.782137] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf 
ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:49.121 [2024-06-10 10:02:42.782150] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:49.121 [2024-06-10 10:02:42.782162] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:49.121 [2024-06-10 10:02:42.782174] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:49.121 [2024-06-10 10:02:42.782186] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:49.121 [2024-06-10 10:02:42.782197] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:49.121 [2024-06-10 10:02:42.782211] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.121 [2024-06-10 10:02:42.782223] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:49.122 [2024-06-10 10:02:42.782235] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:49.122 [2024-06-10 10:02:42.782247] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:49.122 [2024-06-10 10:02:42.782259] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:49.122 [2024-06-10 10:02:42.782272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.122 [2024-06-10 10:02:42.782285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:49.122 [2024-06-10 10:02:42.782297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:24:49.122 [2024-06-10 10:02:42.782309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.122 [2024-06-10 10:02:42.799613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.122 [2024-06-10 10:02:42.799677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.122 [2024-06-10 10:02:42.799696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.247 ms 00:24:49.122 [2024-06-10 10:02:42.799709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.122 [2024-06-10 10:02:42.799840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.122 [2024-06-10 10:02:42.799863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:49.122 [2024-06-10 10:02:42.799875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:24:49.122 [2024-06-10 10:02:42.799886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.122 [2024-06-10 10:02:42.848847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.122 [2024-06-10 10:02:42.848922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.122 [2024-06-10 10:02:42.848958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.856 ms 00:24:49.122 [2024-06-10 10:02:42.848976] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:49.122 [2024-06-10 10:02:42.849051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.122 [2024-06-10 10:02:42.849069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.122 [2024-06-10 10:02:42.849083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:49.122 [2024-06-10 10:02:42.849094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.122 [2024-06-10 10:02:42.849512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.122 [2024-06-10 10:02:42.849545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.122 [2024-06-10 10:02:42.849560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:24:49.122 [2024-06-10 10:02:42.849572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.122 [2024-06-10 10:02:42.849727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.122 [2024-06-10 10:02:42.849758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.122 [2024-06-10 10:02:42.849774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:24:49.122 [2024-06-10 10:02:42.849785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.122 [2024-06-10 10:02:42.866177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.122 [2024-06-10 10:02:42.866251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.122 [2024-06-10 10:02:42.866286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.362 ms 00:24:49.122 [2024-06-10 10:02:42.866298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.122 [2024-06-10 10:02:42.881998] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:49.122 [2024-06-10 10:02:42.882045] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:49.122 [2024-06-10 10:02:42.882065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.122 [2024-06-10 10:02:42.882078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:49.122 [2024-06-10 10:02:42.882092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.642 ms 00:24:49.122 [2024-06-10 10:02:42.882115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.380 [2024-06-10 10:02:42.910353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.380 [2024-06-10 10:02:42.910409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:49.380 [2024-06-10 10:02:42.910443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.187 ms 00:24:49.380 [2024-06-10 10:02:42.910455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.380 [2024-06-10 10:02:42.925483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.380 [2024-06-10 10:02:42.925567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:49.380 [2024-06-10 10:02:42.925602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.973 ms 00:24:49.380 [2024-06-10 10:02:42.925613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.380 [2024-06-10 
10:02:42.940698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.380 [2024-06-10 10:02:42.940752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:49.381 [2024-06-10 10:02:42.940785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.041 ms 00:24:49.381 [2024-06-10 10:02:42.940796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.381 [2024-06-10 10:02:42.941305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.381 [2024-06-10 10:02:42.941346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:49.381 [2024-06-10 10:02:42.941362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:24:49.381 [2024-06-10 10:02:42.941374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.381 [2024-06-10 10:02:43.015803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.381 [2024-06-10 10:02:43.015909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:49.381 [2024-06-10 10:02:43.015930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.401 ms 00:24:49.381 [2024-06-10 10:02:43.015943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.381 [2024-06-10 10:02:43.028545] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:49.381 [2024-06-10 10:02:43.031136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.381 [2024-06-10 10:02:43.031193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:49.381 [2024-06-10 10:02:43.031212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.122 ms 00:24:49.381 [2024-06-10 10:02:43.031225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.381 [2024-06-10 10:02:43.031330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.381 [2024-06-10 10:02:43.031354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:49.381 [2024-06-10 10:02:43.031368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:49.381 [2024-06-10 10:02:43.031380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.381 [2024-06-10 10:02:43.032028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.381 [2024-06-10 10:02:43.032094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:49.381 [2024-06-10 10:02:43.032111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:24:49.381 [2024-06-10 10:02:43.032137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.381 [2024-06-10 10:02:43.034190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.381 [2024-06-10 10:02:43.034259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:49.381 [2024-06-10 10:02:43.034297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.015 ms 00:24:49.381 [2024-06-10 10:02:43.034309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.381 [2024-06-10 10:02:43.034347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.381 [2024-06-10 10:02:43.034363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:49.381 [2024-06-10 10:02:43.034377] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:49.381 [2024-06-10 10:02:43.034394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.381 [2024-06-10 10:02:43.034438] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:49.381 [2024-06-10 10:02:43.034456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.381 [2024-06-10 10:02:43.034468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:49.381 [2024-06-10 10:02:43.034480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:49.381 [2024-06-10 10:02:43.034496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.381 [2024-06-10 10:02:43.064678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.381 [2024-06-10 10:02:43.064737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:49.381 [2024-06-10 10:02:43.064772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.156 ms 00:24:49.381 [2024-06-10 10:02:43.064783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.381 [2024-06-10 10:02:43.064878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.381 [2024-06-10 10:02:43.064905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:49.381 [2024-06-10 10:02:43.064919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:49.381 [2024-06-10 10:02:43.064930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.381 [2024-06-10 10:02:43.066168] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.971 ms, result 0 00:25:29.930  Copying: 25/1024 [MB] (25 MBps) Copying: 50/1024 [MB] (24 MBps) Copying: 75/1024 [MB] (25 MBps) Copying: 100/1024 [MB] (24 MBps) Copying: 126/1024 [MB] (25 MBps) Copying: 152/1024 [MB] (26 MBps) Copying: 177/1024 [MB] (25 MBps) Copying: 202/1024 [MB] (24 MBps) Copying: 226/1024 [MB] (24 MBps) Copying: 251/1024 [MB] (24 MBps) Copying: 275/1024 [MB] (24 MBps) Copying: 299/1024 [MB] (24 MBps) Copying: 324/1024 [MB] (24 MBps) Copying: 348/1024 [MB] (24 MBps) Copying: 373/1024 [MB] (24 MBps) Copying: 397/1024 [MB] (23 MBps) Copying: 420/1024 [MB] (23 MBps) Copying: 445/1024 [MB] (24 MBps) Copying: 471/1024 [MB] (26 MBps) Copying: 498/1024 [MB] (26 MBps) Copying: 523/1024 [MB] (25 MBps) Copying: 549/1024 [MB] (25 MBps) Copying: 574/1024 [MB] (25 MBps) Copying: 600/1024 [MB] (25 MBps) Copying: 625/1024 [MB] (25 MBps) Copying: 652/1024 [MB] (26 MBps) Copying: 678/1024 [MB] (26 MBps) Copying: 705/1024 [MB] (26 MBps) Copying: 731/1024 [MB] (25 MBps) Copying: 756/1024 [MB] (25 MBps) Copying: 782/1024 [MB] (25 MBps) Copying: 808/1024 [MB] (26 MBps) Copying: 835/1024 [MB] (26 MBps) Copying: 861/1024 [MB] (26 MBps) Copying: 887/1024 [MB] (26 MBps) Copying: 913/1024 [MB] (26 MBps) Copying: 938/1024 [MB] (24 MBps) Copying: 963/1024 [MB] (25 MBps) Copying: 988/1024 [MB] (24 MBps) Copying: 1013/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-06-10 10:03:23.677238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.930 [2024-06-10 10:03:23.677325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:29.930 [2024-06-10 10:03:23.677347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 
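[editor's note] The copy that just completed above moves the 262144 blocks requested by the spdk_dd invocation (--count=262144 --skip=262144), and at 4 KiB per FTL block that is 1024 MiB, which is why the progress meter counts up to 1024 [MB]. The reported "average 25 MBps" is also consistent with the wall-clock gap between the 'FTL startup' finish and the first 'Deinit core IO channel' step. A rough cross-check under those assumptions (the 4 KiB block size is inferred from the meter's total, not stated explicitly in this log):

# Cross-check the dd progress meter against the log timestamps.
blocks=262144        # --count from the spdk_dd invocation above
bytes_per_blk=4096   # assumed FTL block size; consistent with 1024 [MB] total
elapsed=40.6         # 'FTL startup' done 10:02:43.07 -> first deinit 10:03:23.68
echo "scale=1; $blocks * $bytes_per_blk / 1048576 / $elapsed" | bc
# prints 25.2, in line with the meter's "(average 25 MBps)"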
00:25:29.930 [2024-06-10 10:03:23.677360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.930 [2024-06-10 10:03:23.677393] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:29.930 [2024-06-10 10:03:23.681138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.930 [2024-06-10 10:03:23.681182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:29.930 [2024-06-10 10:03:23.681202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.718 ms 00:25:29.930 [2024-06-10 10:03:23.681225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.930 [2024-06-10 10:03:23.681547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.930 [2024-06-10 10:03:23.681581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:29.930 [2024-06-10 10:03:23.681599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:25:29.930 [2024-06-10 10:03:23.681613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.930 [2024-06-10 10:03:23.686035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.930 [2024-06-10 10:03:23.686098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:29.930 [2024-06-10 10:03:23.686140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.395 ms 00:25:29.930 [2024-06-10 10:03:23.686156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.930 [2024-06-10 10:03:23.695397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.930 [2024-06-10 10:03:23.695477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:29.930 [2024-06-10 10:03:23.695496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.205 ms 00:25:29.930 [2024-06-10 10:03:23.695509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.190 [2024-06-10 10:03:23.726108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.190 [2024-06-10 10:03:23.726159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:30.190 [2024-06-10 10:03:23.726178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.507 ms 00:25:30.190 [2024-06-10 10:03:23.726190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.190 [2024-06-10 10:03:23.743213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.190 [2024-06-10 10:03:23.743267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:30.190 [2024-06-10 10:03:23.743300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.977 ms 00:25:30.190 [2024-06-10 10:03:23.743312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.190 [2024-06-10 10:03:23.746714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.190 [2024-06-10 10:03:23.746780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:30.190 [2024-06-10 10:03:23.746798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.356 ms 00:25:30.190 [2024-06-10 10:03:23.746811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.190 [2024-06-10 10:03:23.777320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.190 [2024-06-10 
10:03:23.777376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:30.190 [2024-06-10 10:03:23.777409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.485 ms 00:25:30.190 [2024-06-10 10:03:23.777421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.190 [2024-06-10 10:03:23.807401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.190 [2024-06-10 10:03:23.807477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:30.190 [2024-06-10 10:03:23.807495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.936 ms 00:25:30.190 [2024-06-10 10:03:23.807507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.190 [2024-06-10 10:03:23.838403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.190 [2024-06-10 10:03:23.838448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:30.190 [2024-06-10 10:03:23.838480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.851 ms 00:25:30.190 [2024-06-10 10:03:23.838492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.190 [2024-06-10 10:03:23.869216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.190 [2024-06-10 10:03:23.869268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:30.190 [2024-06-10 10:03:23.869300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.598 ms 00:25:30.190 [2024-06-10 10:03:23.869311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.190 [2024-06-10 10:03:23.869351] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:30.190 [2024-06-10 10:03:23.869373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:30.190 [2024-06-10 10:03:23.869387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:25:30.190 [2024-06-10 10:03:23.869399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 
10:03:23.869526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:30.190 [2024-06-10 10:03:23.869717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 
00:25:30.191 [2024-06-10 10:03:23.869838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.869990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 
wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:30.191 [2024-06-10 10:03:23.870642] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:30.191 [2024-06-10 10:03:23.870654] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76e15599-d612-4b12-a5b7-08522e300726 00:25:30.191 [2024-06-10 10:03:23.870673] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:25:30.191 [2024-06-10 10:03:23.870685] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:30.191 [2024-06-10 10:03:23.870696] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:30.191 [2024-06-10 10:03:23.870708] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:30.191 [2024-06-10 10:03:23.870719] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:30.191 [2024-06-10 10:03:23.870731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:30.191 [2024-06-10 10:03:23.870742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:30.191 [2024-06-10 10:03:23.870753] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:30.191 [2024-06-10 10:03:23.870763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:30.191 [2024-06-10 10:03:23.870775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.191 [2024-06-10 10:03:23.870787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:30.191 [2024-06-10 10:03:23.870799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:25:30.191 [2024-06-10 10:03:23.870823] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.191 [2024-06-10 10:03:23.886290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.191 [2024-06-10 10:03:23.886340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:30.191 [2024-06-10 10:03:23.886372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.391 ms 00:25:30.191 [2024-06-10 10:03:23.886383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.191 [2024-06-10 10:03:23.886639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.191 [2024-06-10 10:03:23.886665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:30.191 [2024-06-10 10:03:23.886687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:25:30.192 [2024-06-10 10:03:23.886699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.192 [2024-06-10 10:03:23.929422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.192 [2024-06-10 10:03:23.929484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:30.192 [2024-06-10 10:03:23.929518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.192 [2024-06-10 10:03:23.929529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.192 [2024-06-10 10:03:23.929590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.192 [2024-06-10 10:03:23.929605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:30.192 [2024-06-10 10:03:23.929623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.192 [2024-06-10 10:03:23.929634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.192 [2024-06-10 10:03:23.929738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.192 [2024-06-10 10:03:23.929757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:30.192 [2024-06-10 10:03:23.929787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.192 [2024-06-10 10:03:23.929808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.192 [2024-06-10 10:03:23.929831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.192 [2024-06-10 10:03:23.929846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:30.192 [2024-06-10 10:03:23.929857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.192 [2024-06-10 10:03:23.929875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.450 [2024-06-10 10:03:24.025899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.450 [2024-06-10 10:03:24.025981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:30.450 [2024-06-10 10:03:24.026000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.450 [2024-06-10 10:03:24.026012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.450 [2024-06-10 10:03:24.061809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.450 [2024-06-10 10:03:24.061867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:30.450 [2024-06-10 10:03:24.061884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:25:30.450 [2024-06-10 10:03:24.061903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.450 [2024-06-10 10:03:24.061989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.450 [2024-06-10 10:03:24.062007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:30.450 [2024-06-10 10:03:24.062018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.450 [2024-06-10 10:03:24.062029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.450 [2024-06-10 10:03:24.062096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.450 [2024-06-10 10:03:24.062179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:30.450 [2024-06-10 10:03:24.062192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.450 [2024-06-10 10:03:24.062204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.450 [2024-06-10 10:03:24.062331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.451 [2024-06-10 10:03:24.062362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:30.451 [2024-06-10 10:03:24.062376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.451 [2024-06-10 10:03:24.062389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.451 [2024-06-10 10:03:24.062438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.451 [2024-06-10 10:03:24.062457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:30.451 [2024-06-10 10:03:24.062469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.451 [2024-06-10 10:03:24.062480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.451 [2024-06-10 10:03:24.062550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.451 [2024-06-10 10:03:24.062567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:30.451 [2024-06-10 10:03:24.062579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.451 [2024-06-10 10:03:24.062590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.451 [2024-06-10 10:03:24.062651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.451 [2024-06-10 10:03:24.062671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:30.451 [2024-06-10 10:03:24.062684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.451 [2024-06-10 10:03:24.062695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.451 [2024-06-10 10:03:24.062838] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 385.587 ms, result 0 00:25:31.387 00:25:31.387 00:25:31.387 10:03:25 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:33.917 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:33.917 10:03:27 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:33.917 10:03:27 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:33.917 10:03:27 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:33.917 10:03:27 -- ftl/dirty_shutdown.sh@32 
-- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:33.917 10:03:27 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:33.917 10:03:27 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:33.917 10:03:27 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:33.917 10:03:27 -- ftl/dirty_shutdown.sh@37 -- # killprocess 76331 00:25:33.917 10:03:27 -- common/autotest_common.sh@926 -- # '[' -z 76331 ']' 00:25:33.917 10:03:27 -- common/autotest_common.sh@930 -- # kill -0 76331 00:25:33.917 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (76331) - No such process 00:25:33.917 Process with pid 76331 is not found 00:25:33.917 10:03:27 -- common/autotest_common.sh@953 -- # echo 'Process with pid 76331 is not found' 00:25:33.917 10:03:27 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:34.176 10:03:27 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:34.176 Remove shared memory files 00:25:34.176 10:03:27 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:34.176 10:03:27 -- ftl/common.sh@205 -- # rm -f rm -f 00:25:34.176 10:03:27 -- ftl/common.sh@206 -- # rm -f rm -f 00:25:34.176 10:03:27 -- ftl/common.sh@207 -- # rm -f rm -f 00:25:34.176 10:03:27 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:34.176 10:03:27 -- ftl/common.sh@209 -- # rm -f rm -f 00:25:34.176 00:25:34.176 real 3m51.335s 00:25:34.176 user 4m26.400s 00:25:34.176 sys 0m36.432s 00:25:34.176 10:03:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.176 10:03:27 -- common/autotest_common.sh@10 -- # set +x 00:25:34.176 ************************************ 00:25:34.176 END TEST ftl_dirty_shutdown 00:25:34.176 ************************************ 00:25:34.176 10:03:27 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:25:34.176 10:03:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:25:34.176 10:03:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:34.176 10:03:27 -- common/autotest_common.sh@10 -- # set +x 00:25:34.176 ************************************ 00:25:34.176 START TEST ftl_upgrade_shutdown 00:25:34.176 ************************************ 00:25:34.176 10:03:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:25:34.436 * Looking for test storage... 00:25:34.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:34.436 10:03:27 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:34.436 10:03:27 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:34.436 10:03:27 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:34.436 10:03:27 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
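The dirty-shutdown teardown earlier in this block is deliberately tolerant of a target that already exited: killprocess probes pid 76331 with kill -0 and, when the process is gone, prints the not-found message instead of failing the test. A minimal sketch of that guard, assuming a plain bash helper rather than the exact one in autotest_common.sh:

    killprocess() {
        local pid=$1
        [[ -z "$pid" ]] && return 1                     # no pid recorded, nothing to do
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid" && wait "$pid" 2>/dev/null      # still alive: terminate and reap
        else
            echo "Process with pid $pid is not found"   # the message seen in the log
        fi
    }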
00:25:34.436 10:03:27 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:34.436 10:03:27 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:34.436 10:03:27 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:34.436 10:03:27 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:34.436 10:03:27 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:34.436 10:03:27 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:34.436 10:03:27 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:34.436 10:03:27 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:34.436 10:03:27 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:34.436 10:03:27 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:34.436 10:03:27 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:34.436 10:03:27 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:34.436 10:03:27 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:34.436 10:03:27 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:34.436 10:03:27 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:34.436 10:03:27 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:34.436 10:03:27 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:34.436 10:03:27 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:34.436 10:03:27 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:34.436 10:03:27 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:34.436 10:03:27 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:34.436 10:03:27 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:34.436 10:03:27 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:34.436 10:03:27 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:34.436 10:03:27 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:34.436 10:03:27 -- ftl/common.sh@81 -- # local base_bdev= 00:25:34.436 10:03:27 -- ftl/common.sh@82 -- # local cache_bdev= 00:25:34.436 10:03:27 -- ftl/common.sh@84 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:34.436 10:03:27 -- ftl/common.sh@89 -- # spdk_tgt_pid=78775 00:25:34.436 10:03:27 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:34.436 10:03:27 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:34.436 10:03:27 -- ftl/common.sh@91 -- # waitforlisten 78775 00:25:34.436 10:03:27 -- common/autotest_common.sh@819 -- # '[' -z 78775 ']' 00:25:34.436 10:03:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.436 10:03:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:34.436 10:03:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.436 10:03:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:34.436 10:03:27 -- common/autotest_common.sh@10 -- # set +x 00:25:34.436 [2024-06-10 10:03:28.096156] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:34.436 [2024-06-10 10:03:28.096545] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78775 ] 00:25:34.696 [2024-06-10 10:03:28.270057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.956 [2024-06-10 10:03:28.496208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:34.956 [2024-06-10 10:03:28.496544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.332 10:03:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:36.332 10:03:29 -- common/autotest_common.sh@852 -- # return 0 00:25:36.332 10:03:29 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:36.332 10:03:29 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:36.332 10:03:29 -- ftl/common.sh@99 -- # local params 00:25:36.332 10:03:29 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:36.332 10:03:29 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:36.332 10:03:29 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:36.332 10:03:29 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:25:36.332 10:03:29 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:36.332 10:03:29 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:36.332 10:03:29 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:36.332 10:03:29 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:25:36.332 10:03:29 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:36.332 10:03:29 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:36.332 10:03:29 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:36.332 10:03:29 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:36.332 10:03:29 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:25:36.332 10:03:29 -- ftl/common.sh@54 -- # local name=base 00:25:36.332 10:03:29 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:25:36.332 10:03:29 -- ftl/common.sh@56 -- # local size=20480 00:25:36.332 10:03:29 -- ftl/common.sh@59 -- # local base_bdev 00:25:36.332 10:03:29 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t 
PCIe -a 0000:00:07.0 00:25:36.332 10:03:30 -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:36.332 10:03:30 -- ftl/common.sh@62 -- # local base_size 00:25:36.332 10:03:30 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:36.332 10:03:30 -- common/autotest_common.sh@1357 -- # local bdev_name=basen1 00:25:36.332 10:03:30 -- common/autotest_common.sh@1358 -- # local bdev_info 00:25:36.332 10:03:30 -- common/autotest_common.sh@1359 -- # local bs 00:25:36.332 10:03:30 -- common/autotest_common.sh@1360 -- # local nb 00:25:36.332 10:03:30 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:36.899 10:03:30 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:25:36.899 { 00:25:36.899 "name": "basen1", 00:25:36.899 "aliases": [ 00:25:36.899 "37529680-52ea-4c0e-8e3f-387634c8e07f" 00:25:36.899 ], 00:25:36.899 "product_name": "NVMe disk", 00:25:36.899 "block_size": 4096, 00:25:36.899 "num_blocks": 1310720, 00:25:36.899 "uuid": "37529680-52ea-4c0e-8e3f-387634c8e07f", 00:25:36.899 "assigned_rate_limits": { 00:25:36.899 "rw_ios_per_sec": 0, 00:25:36.899 "rw_mbytes_per_sec": 0, 00:25:36.899 "r_mbytes_per_sec": 0, 00:25:36.899 "w_mbytes_per_sec": 0 00:25:36.899 }, 00:25:36.899 "claimed": true, 00:25:36.899 "claim_type": "read_many_write_one", 00:25:36.899 "zoned": false, 00:25:36.899 "supported_io_types": { 00:25:36.899 "read": true, 00:25:36.899 "write": true, 00:25:36.899 "unmap": true, 00:25:36.899 "write_zeroes": true, 00:25:36.899 "flush": true, 00:25:36.899 "reset": true, 00:25:36.899 "compare": true, 00:25:36.899 "compare_and_write": false, 00:25:36.899 "abort": true, 00:25:36.899 "nvme_admin": true, 00:25:36.899 "nvme_io": true 00:25:36.899 }, 00:25:36.899 "driver_specific": { 00:25:36.899 "nvme": [ 00:25:36.899 { 00:25:36.899 "pci_address": "0000:00:07.0", 00:25:36.899 "trid": { 00:25:36.899 "trtype": "PCIe", 00:25:36.899 "traddr": "0000:00:07.0" 00:25:36.899 }, 00:25:36.899 "ctrlr_data": { 00:25:36.899 "cntlid": 0, 00:25:36.899 "vendor_id": "0x1b36", 00:25:36.899 "model_number": "QEMU NVMe Ctrl", 00:25:36.899 "serial_number": "12341", 00:25:36.899 "firmware_revision": "8.0.0", 00:25:36.899 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:36.899 "oacs": { 00:25:36.899 "security": 0, 00:25:36.900 "format": 1, 00:25:36.900 "firmware": 0, 00:25:36.900 "ns_manage": 1 00:25:36.900 }, 00:25:36.900 "multi_ctrlr": false, 00:25:36.900 "ana_reporting": false 00:25:36.900 }, 00:25:36.900 "vs": { 00:25:36.900 "nvme_version": "1.4" 00:25:36.900 }, 00:25:36.900 "ns_data": { 00:25:36.900 "id": 1, 00:25:36.900 "can_share": false 00:25:36.900 } 00:25:36.900 } 00:25:36.900 ], 00:25:36.900 "mp_policy": "active_passive" 00:25:36.900 } 00:25:36.900 } 00:25:36.900 ]' 00:25:36.900 10:03:30 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:25:36.900 10:03:30 -- common/autotest_common.sh@1362 -- # bs=4096 00:25:36.900 10:03:30 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:25:36.900 10:03:30 -- common/autotest_common.sh@1363 -- # nb=1310720 00:25:36.900 10:03:30 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:25:36.900 10:03:30 -- common/autotest_common.sh@1367 -- # echo 5120 00:25:36.900 10:03:30 -- ftl/common.sh@63 -- # base_size=5120 00:25:36.900 10:03:30 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:36.900 10:03:30 -- ftl/common.sh@67 -- # clear_lvols 00:25:36.900 10:03:30 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:36.900 10:03:30 -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:25:37.157 10:03:30 -- ftl/common.sh@28 -- # stores=ab7a4956-cfb6-486b-92b8-334a12dbc256 00:25:37.157 10:03:30 -- ftl/common.sh@29 -- # for lvs in $stores 00:25:37.158 10:03:30 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ab7a4956-cfb6-486b-92b8-334a12dbc256 00:25:37.415 10:03:30 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:37.415 10:03:31 -- ftl/common.sh@68 -- # lvs=03adfffa-4b30-4710-9e31-63f1bf921fbd 00:25:37.415 10:03:31 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 03adfffa-4b30-4710-9e31-63f1bf921fbd 00:25:37.673 10:03:31 -- ftl/common.sh@107 -- # base_bdev=66779391-b313-4699-b622-cca3e705879d 00:25:37.673 10:03:31 -- ftl/common.sh@108 -- # [[ -z 66779391-b313-4699-b622-cca3e705879d ]] 00:25:37.673 10:03:31 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 66779391-b313-4699-b622-cca3e705879d 5120 00:25:37.673 10:03:31 -- ftl/common.sh@35 -- # local name=cache 00:25:37.673 10:03:31 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:25:37.673 10:03:31 -- ftl/common.sh@37 -- # local base_bdev=66779391-b313-4699-b622-cca3e705879d 00:25:37.673 10:03:31 -- ftl/common.sh@38 -- # local cache_size=5120 00:25:37.673 10:03:31 -- ftl/common.sh@41 -- # get_bdev_size 66779391-b313-4699-b622-cca3e705879d 00:25:37.673 10:03:31 -- common/autotest_common.sh@1357 -- # local bdev_name=66779391-b313-4699-b622-cca3e705879d 00:25:37.673 10:03:31 -- common/autotest_common.sh@1358 -- # local bdev_info 00:25:37.673 10:03:31 -- common/autotest_common.sh@1359 -- # local bs 00:25:37.673 10:03:31 -- common/autotest_common.sh@1360 -- # local nb 00:25:37.673 10:03:31 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 66779391-b313-4699-b622-cca3e705879d 00:25:37.931 10:03:31 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:25:37.931 { 00:25:37.931 "name": "66779391-b313-4699-b622-cca3e705879d", 00:25:37.931 "aliases": [ 00:25:37.931 "lvs/basen1p0" 00:25:37.931 ], 00:25:37.931 "product_name": "Logical Volume", 00:25:37.931 "block_size": 4096, 00:25:37.931 "num_blocks": 5242880, 00:25:37.931 "uuid": "66779391-b313-4699-b622-cca3e705879d", 00:25:37.931 "assigned_rate_limits": { 00:25:37.931 "rw_ios_per_sec": 0, 00:25:37.931 "rw_mbytes_per_sec": 0, 00:25:37.931 "r_mbytes_per_sec": 0, 00:25:37.931 "w_mbytes_per_sec": 0 00:25:37.931 }, 00:25:37.931 "claimed": false, 00:25:37.931 "zoned": false, 00:25:37.931 "supported_io_types": { 00:25:37.931 "read": true, 00:25:37.931 "write": true, 00:25:37.931 "unmap": true, 00:25:37.931 "write_zeroes": true, 00:25:37.931 "flush": false, 00:25:37.931 "reset": true, 00:25:37.931 "compare": false, 00:25:37.931 "compare_and_write": false, 00:25:37.931 "abort": false, 00:25:37.931 "nvme_admin": false, 00:25:37.931 "nvme_io": false 00:25:37.931 }, 00:25:37.931 "driver_specific": { 00:25:37.931 "lvol": { 00:25:37.931 "lvol_store_uuid": "03adfffa-4b30-4710-9e31-63f1bf921fbd", 00:25:37.931 "base_bdev": "basen1", 00:25:37.931 "thin_provision": true, 00:25:37.931 "snapshot": false, 00:25:37.931 "clone": false, 00:25:37.931 "esnap_clone": false 00:25:37.931 } 00:25:37.931 } 00:25:37.931 } 00:25:37.931 ]' 00:25:37.931 10:03:31 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:25:38.189 10:03:31 -- common/autotest_common.sh@1362 -- # bs=4096 00:25:38.189 10:03:31 -- common/autotest_common.sh@1363 -- # jq '.[] 
.num_blocks' 00:25:38.189 10:03:31 -- common/autotest_common.sh@1363 -- # nb=5242880 00:25:38.189 10:03:31 -- common/autotest_common.sh@1366 -- # bdev_size=20480 00:25:38.189 10:03:31 -- common/autotest_common.sh@1367 -- # echo 20480 00:25:38.189 10:03:31 -- ftl/common.sh@41 -- # local base_size=1024 00:25:38.189 10:03:31 -- ftl/common.sh@44 -- # local nvc_bdev 00:25:38.189 10:03:31 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:25:38.449 10:03:32 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:38.449 10:03:32 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:38.449 10:03:32 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:38.715 10:03:32 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:38.715 10:03:32 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:38.715 10:03:32 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 66779391-b313-4699-b622-cca3e705879d -c cachen1p0 --l2p_dram_limit 2 00:25:38.992 [2024-06-10 10:03:32.522126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.992 [2024-06-10 10:03:32.522195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:38.992 [2024-06-10 10:03:32.522237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:38.992 [2024-06-10 10:03:32.522250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.992 [2024-06-10 10:03:32.522326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.992 [2024-06-10 10:03:32.522345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:38.992 [2024-06-10 10:03:32.522360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:25:38.992 [2024-06-10 10:03:32.522371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.992 [2024-06-10 10:03:32.522402] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:38.992 [2024-06-10 10:03:32.523420] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:38.992 [2024-06-10 10:03:32.523474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.992 [2024-06-10 10:03:32.523488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:38.992 [2024-06-10 10:03:32.523505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.066 ms 00:25:38.992 [2024-06-10 10:03:32.523517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.992 [2024-06-10 10:03:32.523679] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 4a4fca22-6c25-4c8b-9d9b-1a0085f88bac 00:25:38.992 [2024-06-10 10:03:32.524726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.992 [2024-06-10 10:03:32.524787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:38.992 [2024-06-10 10:03:32.524804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:25:38.992 [2024-06-10 10:03:32.524818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.992 [2024-06-10 10:03:32.529652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.992 [2024-06-10 10:03:32.529703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] 
name: Initialize memory pools 00:25:38.992 [2024-06-10 10:03:32.529736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.776 ms 00:25:38.992 [2024-06-10 10:03:32.529749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.992 [2024-06-10 10:03:32.529807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.992 [2024-06-10 10:03:32.529844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:38.992 [2024-06-10 10:03:32.529856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:38.992 [2024-06-10 10:03:32.529872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.992 [2024-06-10 10:03:32.529952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.992 [2024-06-10 10:03:32.529973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:38.992 [2024-06-10 10:03:32.529985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:25:38.992 [2024-06-10 10:03:32.530001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.992 [2024-06-10 10:03:32.530052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:38.992 [2024-06-10 10:03:32.534605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.992 [2024-06-10 10:03:32.534661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:38.992 [2024-06-10 10:03:32.534699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.578 ms 00:25:38.992 [2024-06-10 10:03:32.534727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.992 [2024-06-10 10:03:32.534766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.992 [2024-06-10 10:03:32.534782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:38.992 [2024-06-10 10:03:32.534796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:38.992 [2024-06-10 10:03:32.534807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.992 [2024-06-10 10:03:32.534850] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:38.992 [2024-06-10 10:03:32.534982] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:25:38.992 [2024-06-10 10:03:32.535005] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:38.992 [2024-06-10 10:03:32.535020] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:25:38.992 [2024-06-10 10:03:32.535037] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:38.992 [2024-06-10 10:03:32.535050] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:38.992 [2024-06-10 10:03:32.535079] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:38.992 [2024-06-10 10:03:32.535089] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:38.992 [2024-06-10 10:03:32.535111] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:25:38.992 [2024-06-10 10:03:32.535126] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:25:38.992 [2024-06-10 
10:03:32.535139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.992 [2024-06-10 10:03:32.535169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:38.992 [2024-06-10 10:03:32.535203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.292 ms 00:25:38.992 [2024-06-10 10:03:32.535215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.992 [2024-06-10 10:03:32.535290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.992 [2024-06-10 10:03:32.535304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:38.992 [2024-06-10 10:03:32.535331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:25:38.992 [2024-06-10 10:03:32.535342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.992 [2024-06-10 10:03:32.535458] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:38.992 [2024-06-10 10:03:32.535476] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:38.992 [2024-06-10 10:03:32.535491] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:38.992 [2024-06-10 10:03:32.535503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.992 [2024-06-10 10:03:32.535517] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:38.992 [2024-06-10 10:03:32.535528] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:38.992 [2024-06-10 10:03:32.535541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:38.992 [2024-06-10 10:03:32.535552] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:38.992 [2024-06-10 10:03:32.535565] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:38.992 [2024-06-10 10:03:32.535575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.992 [2024-06-10 10:03:32.535588] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:38.992 [2024-06-10 10:03:32.535599] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:38.992 [2024-06-10 10:03:32.535614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.992 [2024-06-10 10:03:32.535625] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:38.992 [2024-06-10 10:03:32.535638] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:25:38.992 [2024-06-10 10:03:32.535649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.992 [2024-06-10 10:03:32.535664] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:38.992 [2024-06-10 10:03:32.535674] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:25:38.992 [2024-06-10 10:03:32.535686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.992 [2024-06-10 10:03:32.535697] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:25:38.992 [2024-06-10 10:03:32.535709] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:25:38.992 [2024-06-10 10:03:32.535720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:25:38.992 [2024-06-10 10:03:32.535733] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:38.992 [2024-06-10 10:03:32.535745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:38.992 [2024-06-10 
10:03:32.535758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:38.992 [2024-06-10 10:03:32.535768] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:38.992 [2024-06-10 10:03:32.535781] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:25:38.992 [2024-06-10 10:03:32.535792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:38.992 [2024-06-10 10:03:32.535804] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:38.992 [2024-06-10 10:03:32.535815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:38.992 [2024-06-10 10:03:32.535828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:38.992 [2024-06-10 10:03:32.535838] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:38.992 [2024-06-10 10:03:32.535853] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:25:38.992 [2024-06-10 10:03:32.535864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:38.992 [2024-06-10 10:03:32.535876] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:38.992 [2024-06-10 10:03:32.535887] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:38.992 [2024-06-10 10:03:32.535899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.993 [2024-06-10 10:03:32.535910] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:38.993 [2024-06-10 10:03:32.535924] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:25:38.993 [2024-06-10 10:03:32.535934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.993 [2024-06-10 10:03:32.535947] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:38.993 [2024-06-10 10:03:32.535958] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:38.993 [2024-06-10 10:03:32.535971] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:38.993 [2024-06-10 10:03:32.535983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.993 [2024-06-10 10:03:32.535997] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:38.993 [2024-06-10 10:03:32.536008] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:38.993 [2024-06-10 10:03:32.536021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:38.993 [2024-06-10 10:03:32.536032] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:38.993 [2024-06-10 10:03:32.536047] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:38.993 [2024-06-10 10:03:32.536058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:38.993 [2024-06-10 10:03:32.536072] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:38.993 [2024-06-10 10:03:32.536087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:38.993 [2024-06-10 10:03:32.536105] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:38.993 [2024-06-10 10:03:32.536116] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:25:38.993 [2024-06-10 
10:03:32.536146] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:25:38.993 [2024-06-10 10:03:32.536161] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:25:38.993 [2024-06-10 10:03:32.536175] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:25:38.993 [2024-06-10 10:03:32.536186] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:25:38.993 [2024-06-10 10:03:32.536200] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:25:38.993 [2024-06-10 10:03:32.536212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:25:38.993 [2024-06-10 10:03:32.536225] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:25:38.993 [2024-06-10 10:03:32.536237] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:25:38.993 [2024-06-10 10:03:32.536250] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:25:38.993 [2024-06-10 10:03:32.536262] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:25:38.993 [2024-06-10 10:03:32.536280] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:25:38.993 [2024-06-10 10:03:32.536292] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:38.993 [2024-06-10 10:03:32.536307] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:38.993 [2024-06-10 10:03:32.536319] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:38.993 [2024-06-10 10:03:32.536333] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:38.993 [2024-06-10 10:03:32.536345] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:38.993 [2024-06-10 10:03:32.536359] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:38.993 [2024-06-10 10:03:32.536372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.993 [2024-06-10 10:03:32.536386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:38.993 [2024-06-10 10:03:32.536399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.988 ms 00:25:38.993 [2024-06-10 10:03:32.536412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.993 [2024-06-10 10:03:32.554310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.993 [2024-06-10 10:03:32.554530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize metadata 00:25:38.993 [2024-06-10 10:03:32.554663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.841 ms 00:25:38.993 [2024-06-10 10:03:32.554722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.993 [2024-06-10 10:03:32.554857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.993 [2024-06-10 10:03:32.555023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:38.993 [2024-06-10 10:03:32.555188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:38.993 [2024-06-10 10:03:32.555343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.993 [2024-06-10 10:03:32.594174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.993 [2024-06-10 10:03:32.594387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:38.993 [2024-06-10 10:03:32.594510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 38.626 ms 00:25:38.993 [2024-06-10 10:03:32.594566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.993 [2024-06-10 10:03:32.594682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.993 [2024-06-10 10:03:32.594777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:38.993 [2024-06-10 10:03:32.594884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:38.993 [2024-06-10 10:03:32.594996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.993 [2024-06-10 10:03:32.595530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.993 [2024-06-10 10:03:32.595571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:38.993 [2024-06-10 10:03:32.595588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:25:38.993 [2024-06-10 10:03:32.595602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.993 [2024-06-10 10:03:32.595654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.993 [2024-06-10 10:03:32.595685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:38.993 [2024-06-10 10:03:32.595698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:38.993 [2024-06-10 10:03:32.595711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.993 [2024-06-10 10:03:32.613957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.993 [2024-06-10 10:03:32.614014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:38.993 [2024-06-10 10:03:32.614033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.220 ms 00:25:38.993 [2024-06-10 10:03:32.614048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.993 [2024-06-10 10:03:32.628112] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:38.993 [2024-06-10 10:03:32.628936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.993 [2024-06-10 10:03:32.628973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:38.993 [2024-06-10 10:03:32.628993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.735 ms 00:25:38.993 [2024-06-10 10:03:32.629005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.993 [2024-06-10 
10:03:32.661279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.993 [2024-06-10 10:03:32.661340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:38.993 [2024-06-10 10:03:32.661379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 32.235 ms 00:25:38.993 [2024-06-10 10:03:32.661392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.993 [2024-06-10 10:03:32.661451] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:25:38.993 [2024-06-10 10:03:32.661471] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:25:43.255 [2024-06-10 10:03:36.245996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.255 [2024-06-10 10:03:36.246070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:43.255 [2024-06-10 10:03:36.246096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3584.564 ms 00:25:43.255 [2024-06-10 10:03:36.246135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.255 [2024-06-10 10:03:36.246237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.255 [2024-06-10 10:03:36.246254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:43.255 [2024-06-10 10:03:36.246270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:25:43.255 [2024-06-10 10:03:36.246282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.255 [2024-06-10 10:03:36.277288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.255 [2024-06-10 10:03:36.277357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:43.255 [2024-06-10 10:03:36.277399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.931 ms 00:25:43.255 [2024-06-10 10:03:36.277411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.255 [2024-06-10 10:03:36.309379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.255 [2024-06-10 10:03:36.309423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:43.255 [2024-06-10 10:03:36.309446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 31.929 ms 00:25:43.255 [2024-06-10 10:03:36.309458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.255 [2024-06-10 10:03:36.309880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.255 [2024-06-10 10:03:36.309905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:43.255 [2024-06-10 10:03:36.309922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.390 ms 00:25:43.255 [2024-06-10 10:03:36.309933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.255 [2024-06-10 10:03:36.398403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.255 [2024-06-10 10:03:36.398461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:43.255 [2024-06-10 10:03:36.398517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 88.402 ms 00:25:43.255 [2024-06-10 10:03:36.398530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.255 [2024-06-10 10:03:36.430391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:43.255 [2024-06-10 10:03:36.430444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:43.255 [2024-06-10 10:03:36.430480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 31.801 ms 00:25:43.255 [2024-06-10 10:03:36.430495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.255 [2024-06-10 10:03:36.432770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.255 [2024-06-10 10:03:36.432807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:25:43.255 [2024-06-10 10:03:36.432846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.225 ms 00:25:43.255 [2024-06-10 10:03:36.432858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.255 [2024-06-10 10:03:36.463799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.255 [2024-06-10 10:03:36.463855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:43.255 [2024-06-10 10:03:36.463891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.872 ms 00:25:43.255 [2024-06-10 10:03:36.463903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.255 [2024-06-10 10:03:36.463958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.255 [2024-06-10 10:03:36.463976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:43.255 [2024-06-10 10:03:36.463991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:43.255 [2024-06-10 10:03:36.464002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.255 [2024-06-10 10:03:36.464166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.255 [2024-06-10 10:03:36.464186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:43.255 [2024-06-10 10:03:36.464204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:25:43.255 [2024-06-10 10:03:36.464216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.255 [2024-06-10 10:03:36.465321] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3942.707 ms, result 0 00:25:43.255 { 00:25:43.255 "name": "ftl", 00:25:43.255 "uuid": "4a4fca22-6c25-4c8b-9d9b-1a0085f88bac" 00:25:43.255 } 00:25:43.255 10:03:36 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:43.255 [2024-06-10 10:03:36.732542] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.255 10:03:36 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:43.255 10:03:37 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:43.514 [2024-06-10 10:03:37.249231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:25:43.514 10:03:37 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:43.777 [2024-06-10 10:03:37.466645] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:43.777 10:03:37 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:44.346 Fill 
FTL, iteration 1 00:25:44.346 10:03:37 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:44.346 10:03:37 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:44.346 10:03:37 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:44.346 10:03:37 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:44.346 10:03:37 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:44.346 10:03:37 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:44.346 10:03:37 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:44.346 10:03:37 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:44.346 10:03:37 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:44.346 10:03:37 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:44.346 10:03:37 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:44.346 10:03:37 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:44.346 10:03:37 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:44.346 10:03:37 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:44.346 10:03:37 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:44.346 10:03:37 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:44.346 10:03:37 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:44.346 10:03:37 -- ftl/common.sh@163 -- # spdk_ini_pid=78909 00:25:44.346 10:03:37 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:44.346 10:03:37 -- ftl/common.sh@165 -- # waitforlisten 78909 /var/tmp/spdk.tgt.sock 00:25:44.346 10:03:37 -- common/autotest_common.sh@819 -- # '[' -z 78909 ']' 00:25:44.346 10:03:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:44.346 10:03:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:44.347 10:03:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:44.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:44.347 10:03:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:44.347 10:03:37 -- common/autotest_common.sh@10 -- # set +x 00:25:44.347 [2024-06-10 10:03:37.943795] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
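tcp_dd does not push I/O through the FTL target itself: the target keeps core 0 and the default RPC socket, while a short-lived initiator is pinned to core 1 behind /var/tmp/spdk.tgt.sock, so the two apps never contend for a reactor or an RPC endpoint. The launch-and-wait pattern, roughly (spdk_ini_bin and waitforlisten are the helpers/variables from common.sh seen above; the rest is illustrative):

    "$spdk_ini_bin" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!
    # block until the initiator's RPC server accepts connections on the socket
    waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock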
00:25:44.347 [2024-06-10 10:03:37.944192] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78909 ] 00:25:44.347 [2024-06-10 10:03:38.105278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.605 [2024-06-10 10:03:38.286147] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:44.605 [2024-06-10 10:03:38.286611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.980 10:03:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:45.980 10:03:39 -- common/autotest_common.sh@852 -- # return 0 00:25:45.980 10:03:39 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:46.238 ftln1 00:25:46.238 10:03:39 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:46.239 10:03:39 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:46.497 10:03:40 -- ftl/common.sh@173 -- # echo ']}' 00:25:46.497 10:03:40 -- ftl/common.sh@176 -- # killprocess 78909 00:25:46.497 10:03:40 -- common/autotest_common.sh@926 -- # '[' -z 78909 ']' 00:25:46.497 10:03:40 -- common/autotest_common.sh@930 -- # kill -0 78909 00:25:46.497 10:03:40 -- common/autotest_common.sh@931 -- # uname 00:25:46.497 10:03:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:46.497 10:03:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78909 00:25:46.497 killing process with pid 78909 00:25:46.497 10:03:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:46.497 10:03:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:46.497 10:03:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78909' 00:25:46.497 10:03:40 -- common/autotest_common.sh@945 -- # kill 78909 00:25:46.497 10:03:40 -- common/autotest_common.sh@950 -- # wait 78909 00:25:48.397 10:03:42 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:48.397 10:03:42 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:48.655 [2024-06-10 10:03:42.231374] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
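The initiator only lives long enough to attach the remote FTL bdev and serialize the bdev subsystem into ini.json; each transfer is then done by a fresh spdk_dd that replays the JSON via --json, with ftln1 already wired up over NVMe/TCP. Condensed from the commands above (a sketch of the common.sh flow, not a verbatim copy):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0           # namespace 1 appears as ftln1
    {
        echo '{"subsystems": ['
        $rpc save_subsystem_config -n bdev              # dump only the bdev subsystem
        echo ']}'
    } > "$spdk_ini_cnfg"                                # test/ftl/config/ini.json
    killprocess "$spdk_ini_pid"                         # the initiator has served its purpose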
00:25:48.655 [2024-06-10 10:03:42.232349] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78966 ] 00:25:48.655 [2024-06-10 10:03:42.404274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.913 [2024-06-10 10:03:42.587037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.477  Copying: 211/1024 [MB] (211 MBps) Copying: 427/1024 [MB] (216 MBps) Copying: 642/1024 [MB] (215 MBps) Copying: 853/1024 [MB] (211 MBps) Copying: 1024/1024 [MB] (average 212 MBps) 00:25:55.477 00:25:55.477 Calculate MD5 checksum, iteration 1 00:25:55.477 10:03:48 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:55.477 10:03:48 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:55.477 10:03:48 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:55.477 10:03:48 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:55.477 10:03:48 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:55.477 10:03:48 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:55.477 10:03:48 -- ftl/common.sh@154 -- # return 0 00:25:55.477 10:03:48 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:55.477 [2024-06-10 10:03:49.045660] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
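Each iteration of the fill/verify loop moves a 1 GiB window through the device: 1024 writes of 1 MiB land at the current seek, the same span is read back at the matching skip, and both offsets then advance by count, so iteration 2 exercises the second gigabyte. In outline (the shape of the loop in upgrade_shutdown.sh, using the bs/count/qd/iterations values set above; $testdir stands in for the full test/ftl path):

    seek=0 skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')   # digest of what FTL returned
    done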
00:25:55.477 [2024-06-10 10:03:49.045827] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79035 ] 00:25:55.477 [2024-06-10 10:03:49.217020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.736 [2024-06-10 10:03:49.403993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.257  Copying: 509/1024 [MB] (509 MBps) Copying: 916/1024 [MB] (407 MBps) Copying: 1024/1024 [MB] (average 464 MBps) 00:25:59.257 00:25:59.515 10:03:53 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:59.515 10:03:53 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:02.045 10:03:55 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:02.045 10:03:55 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=bd2dbe8cc33fb17c53feee147924878f 00:26:02.045 10:03:55 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:02.045 10:03:55 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:02.045 10:03:55 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:02.045 Fill FTL, iteration 2 00:26:02.045 10:03:55 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:02.045 10:03:55 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:02.045 10:03:55 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:02.045 10:03:55 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:02.045 10:03:55 -- ftl/common.sh@154 -- # return 0 00:26:02.045 10:03:55 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:02.045 [2024-06-10 10:03:55.394664] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
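Storing each digest (bd2dbe8cc33fb17c53feee147924878f for the first window) is the point of the exercise: later in the test the same windows are read back and must hash identically, which is what makes the shutdown/upgrade cycle verifiable. Checking a saved digest needs only a one-liner of the md5sum -c form already used by dirty_shutdown above (illustrative):

    echo "${sums[0]}  /home/vagrant/spdk_repo/spdk/test/ftl/file" | md5sum -c -
    # prints '<file>: OK' on a match and exits non-zero on any mismatch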
00:26:02.045 [2024-06-10 10:03:55.394816] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79106 ] 00:26:02.045 [2024-06-10 10:03:55.564960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.045 [2024-06-10 10:03:55.756445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.650  Copying: 210/1024 [MB] (210 MBps) Copying: 420/1024 [MB] (210 MBps) Copying: 632/1024 [MB] (212 MBps) Copying: 840/1024 [MB] (208 MBps) Copying: 1024/1024 [MB] (average 210 MBps) 00:26:08.650 00:26:08.650 10:04:02 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:08.650 Calculate MD5 checksum, iteration 2 00:26:08.650 10:04:02 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:08.650 10:04:02 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:08.650 10:04:02 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:08.650 10:04:02 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:08.650 10:04:02 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:08.650 10:04:02 -- ftl/common.sh@154 -- # return 0 00:26:08.650 10:04:02 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:08.650 [2024-06-10 10:04:02.233435] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
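Once the second read-back lands, the script turns to FTL properties: setting verbose_mode unlocks the extended bdev_ftl_get_properties dump, and the used-chunk check at upgrade_shutdown.sh@63 (a few lines below) counts cache chunks with non-zero utilization. That jq filter, unfolded for readability:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[]
             | select(.name == "cache_device")
             | .chunks[]
             | select(.utilization != 0.0)]
            | length'
    # -> 3 here: two CLOSED chunks plus one partially filled OPEN chunk hold data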
00:26:08.650 [2024-06-10 10:04:02.234483] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79181 ] 00:26:08.650 [2024-06-10 10:04:02.400432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.907 [2024-06-10 10:04:02.583684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.632  Copying: 517/1024 [MB] (517 MBps) Copying: 1024/1024 [MB] (average 514 MBps) 00:26:13.632 00:26:13.632 10:04:07 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:13.632 10:04:07 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:15.528 10:04:09 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:15.528 10:04:09 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=55934c02ff4f518788588811e9287a49 00:26:15.528 10:04:09 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:15.528 10:04:09 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:15.528 10:04:09 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:15.786 [2024-06-10 10:04:09.482519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.786 [2024-06-10 10:04:09.482583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:15.786 [2024-06-10 10:04:09.482605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:15.786 [2024-06-10 10:04:09.482617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.786 [2024-06-10 10:04:09.482659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.786 [2024-06-10 10:04:09.482675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:15.786 [2024-06-10 10:04:09.482688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:15.786 [2024-06-10 10:04:09.482699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.786 [2024-06-10 10:04:09.482734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.786 [2024-06-10 10:04:09.482748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:15.787 [2024-06-10 10:04:09.482760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:15.787 [2024-06-10 10:04:09.482771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.787 [2024-06-10 10:04:09.482874] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.330 ms, result 0 00:26:15.787 true 00:26:15.787 10:04:09 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:16.045 { 00:26:16.045 "name": "ftl", 00:26:16.045 "properties": [ 00:26:16.045 { 00:26:16.045 "name": "superblock_version", 00:26:16.045 "value": 5, 00:26:16.045 "read-only": true 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "name": "base_device", 00:26:16.045 "bands": [ 00:26:16.045 { 00:26:16.045 "id": 0, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 1, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 2, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 3, 
00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 4, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 5, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 6, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 7, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 8, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 9, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 10, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 11, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 12, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 13, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 14, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 15, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 16, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 17, 00:26:16.045 "state": "FREE", 00:26:16.045 "validity": 0.0 00:26:16.045 } 00:26:16.045 ], 00:26:16.045 "read-only": true 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "name": "cache_device", 00:26:16.045 "type": "bdev", 00:26:16.045 "chunks": [ 00:26:16.045 { 00:26:16.045 "id": 0, 00:26:16.045 "state": "CLOSED", 00:26:16.045 "utilization": 1.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 1, 00:26:16.045 "state": "CLOSED", 00:26:16.045 "utilization": 1.0 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 2, 00:26:16.045 "state": "OPEN", 00:26:16.045 "utilization": 0.001953125 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "id": 3, 00:26:16.045 "state": "OPEN", 00:26:16.045 "utilization": 0.0 00:26:16.045 } 00:26:16.045 ], 00:26:16.045 "read-only": true 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "name": "verbose_mode", 00:26:16.045 "value": true, 00:26:16.045 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:16.045 }, 00:26:16.045 { 00:26:16.045 "name": "prep_upgrade_on_shutdown", 00:26:16.045 "value": false, 00:26:16.045 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:16.045 } 00:26:16.045 ] 00:26:16.045 } 00:26:16.045 10:04:09 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:16.304 [2024-06-10 10:04:09.935071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.304 [2024-06-10 10:04:09.935174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:16.304 [2024-06-10 10:04:09.935194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:16.304 [2024-06-10 10:04:09.935204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.304 [2024-06-10 10:04:09.935254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.304 [2024-06-10 
10:04:09.935268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:16.304 [2024-06-10 10:04:09.935287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:16.304 [2024-06-10 10:04:09.935297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.304 [2024-06-10 10:04:09.935322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.304 [2024-06-10 10:04:09.935335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:16.304 [2024-06-10 10:04:09.935345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:16.304 [2024-06-10 10:04:09.935355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.304 [2024-06-10 10:04:09.935454] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.356 ms, result 0 00:26:16.304 true 00:26:16.304 10:04:09 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:16.304 10:04:09 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:16.304 10:04:09 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:16.562 10:04:10 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:16.562 10:04:10 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:16.562 10:04:10 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:16.820 [2024-06-10 10:04:10.375664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.820 [2024-06-10 10:04:10.376059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:16.820 [2024-06-10 10:04:10.376227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:16.820 [2024-06-10 10:04:10.376280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.820 [2024-06-10 10:04:10.376398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.820 [2024-06-10 10:04:10.376486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:16.820 [2024-06-10 10:04:10.376612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:16.820 [2024-06-10 10:04:10.376747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.820 [2024-06-10 10:04:10.376881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.820 [2024-06-10 10:04:10.376994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:16.820 [2024-06-10 10:04:10.377102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:16.820 [2024-06-10 10:04:10.377225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.820 [2024-06-10 10:04:10.377421] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.738 ms, result 0 00:26:16.820 true 00:26:16.820 10:04:10 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:17.078 { 00:26:17.078 "name": "ftl", 00:26:17.078 "properties": [ 00:26:17.078 { 00:26:17.078 "name": "superblock_version", 00:26:17.078 "value": 5, 00:26:17.078 "read-only": true 00:26:17.078 }, 00:26:17.079 { 00:26:17.079 "name": "base_device", 00:26:17.079 
"bands": [ 00:26:17.079 { 00:26:17.079 "id": 0, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 1, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 2, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 3, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 4, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 5, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 6, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 7, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 8, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 9, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 10, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 11, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 12, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 13, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 14, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 15, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 16, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 17, 00:26:17.079 "state": "FREE", 00:26:17.079 "validity": 0.0 00:26:17.079 } 00:26:17.079 ], 00:26:17.079 "read-only": true 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "name": "cache_device", 00:26:17.079 "type": "bdev", 00:26:17.079 "chunks": [ 00:26:17.079 { 00:26:17.079 "id": 0, 00:26:17.079 "state": "CLOSED", 00:26:17.079 "utilization": 1.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 1, 00:26:17.079 "state": "CLOSED", 00:26:17.079 "utilization": 1.0 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 2, 00:26:17.079 "state": "OPEN", 00:26:17.079 "utilization": 0.001953125 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "id": 3, 00:26:17.079 "state": "OPEN", 00:26:17.079 "utilization": 0.0 00:26:17.079 } 00:26:17.079 ], 00:26:17.079 "read-only": true 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "name": "verbose_mode", 00:26:17.079 "value": true, 00:26:17.079 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:17.079 }, 00:26:17.079 { 00:26:17.079 "name": "prep_upgrade_on_shutdown", 00:26:17.079 "value": true, 00:26:17.079 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:17.079 } 00:26:17.079 ] 00:26:17.079 } 00:26:17.079 10:04:10 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:17.079 10:04:10 -- ftl/common.sh@130 -- # [[ -n 78775 ]] 00:26:17.079 10:04:10 -- ftl/common.sh@131 -- # killprocess 78775 00:26:17.079 10:04:10 -- common/autotest_common.sh@926 -- # '[' -z 78775 ']' 00:26:17.079 10:04:10 -- common/autotest_common.sh@930 -- # kill -0 78775 
00:26:17.079 10:04:10 -- common/autotest_common.sh@931 -- # uname 00:26:17.079 10:04:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:17.079 10:04:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78775 00:26:17.079 killing process with pid 78775 00:26:17.079 10:04:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:17.079 10:04:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:17.079 10:04:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78775' 00:26:17.079 10:04:10 -- common/autotest_common.sh@945 -- # kill 78775 00:26:17.079 10:04:10 -- common/autotest_common.sh@950 -- # wait 78775 00:26:18.015 [2024-06-10 10:04:11.562218] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:26:18.015 [2024-06-10 10:04:11.578647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.015 [2024-06-10 10:04:11.578725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:18.015 [2024-06-10 10:04:11.578763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:18.015 [2024-06-10 10:04:11.578775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.015 [2024-06-10 10:04:11.578812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:18.015 [2024-06-10 10:04:11.582183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.015 [2024-06-10 10:04:11.582213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:18.015 [2024-06-10 10:04:11.582244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.350 ms 00:26:18.015 [2024-06-10 10:04:11.582254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.063538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.007 [2024-06-10 10:04:20.063619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:28.007 [2024-06-10 10:04:20.063642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8481.307 ms 00:26:28.007 [2024-06-10 10:04:20.063655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.064958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.007 [2024-06-10 10:04:20.065002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:28.007 [2024-06-10 10:04:20.065018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.278 ms 00:26:28.007 [2024-06-10 10:04:20.065030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.066313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.007 [2024-06-10 10:04:20.066351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:26:28.007 [2024-06-10 10:04:20.066367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.231 ms 00:26:28.007 [2024-06-10 10:04:20.066379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.079180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.007 [2024-06-10 10:04:20.079221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:28.007 [2024-06-10 10:04:20.079238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 12.741 ms 00:26:28.007 [2024-06-10 10:04:20.079250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.087330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.007 [2024-06-10 10:04:20.087374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:28.007 [2024-06-10 10:04:20.087399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.037 ms 00:26:28.007 [2024-06-10 10:04:20.087411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.087522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.007 [2024-06-10 10:04:20.087542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:28.007 [2024-06-10 10:04:20.087555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:26:28.007 [2024-06-10 10:04:20.087566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.100166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.007 [2024-06-10 10:04:20.100206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:28.007 [2024-06-10 10:04:20.100222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.577 ms 00:26:28.007 [2024-06-10 10:04:20.100233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.112951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.007 [2024-06-10 10:04:20.112987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:28.007 [2024-06-10 10:04:20.113018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.678 ms 00:26:28.007 [2024-06-10 10:04:20.113028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.125785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.007 [2024-06-10 10:04:20.125820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:28.007 [2024-06-10 10:04:20.125851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.717 ms 00:26:28.007 [2024-06-10 10:04:20.125861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.138437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.007 [2024-06-10 10:04:20.138493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:28.007 [2024-06-10 10:04:20.138525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.501 ms 00:26:28.007 [2024-06-10 10:04:20.138536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.138577] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:28.007 [2024-06-10 10:04:20.138600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:28.007 [2024-06-10 10:04:20.138615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:28.007 [2024-06-10 10:04:20.138628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:28.007 [2024-06-10 10:04:20.138640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 
10:04:20.138651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:28.007 [2024-06-10 10:04:20.138816] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:28.007 [2024-06-10 10:04:20.138827] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 4a4fca22-6c25-4c8b-9d9b-1a0085f88bac 00:26:28.007 [2024-06-10 10:04:20.138857] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:28.007 [2024-06-10 10:04:20.138869] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:26:28.007 [2024-06-10 10:04:20.138879] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:28.007 [2024-06-10 10:04:20.138891] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:28.007 [2024-06-10 10:04:20.138902] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:28.007 [2024-06-10 10:04:20.138913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:28.007 [2024-06-10 10:04:20.138924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:28.007 [2024-06-10 10:04:20.138934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:28.007 [2024-06-10 10:04:20.138947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:28.007 [2024-06-10 10:04:20.138958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.007 [2024-06-10 10:04:20.138970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:28.007 [2024-06-10 10:04:20.138982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 
0.383 ms 00:26:28.007 [2024-06-10 10:04:20.138994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.156041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.007 [2024-06-10 10:04:20.156130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:28.007 [2024-06-10 10:04:20.156149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.023 ms 00:26:28.007 [2024-06-10 10:04:20.156161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.156395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.007 [2024-06-10 10:04:20.156417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:28.007 [2024-06-10 10:04:20.156429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:26:28.007 [2024-06-10 10:04:20.156447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.213962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:28.007 [2024-06-10 10:04:20.214030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:28.007 [2024-06-10 10:04:20.214048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:28.007 [2024-06-10 10:04:20.214060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.007 [2024-06-10 10:04:20.214136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:28.008 [2024-06-10 10:04:20.214153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:28.008 [2024-06-10 10:04:20.214166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:28.008 [2024-06-10 10:04:20.214188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.008 [2024-06-10 10:04:20.214307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:28.008 [2024-06-10 10:04:20.214326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:28.008 [2024-06-10 10:04:20.214339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:28.008 [2024-06-10 10:04:20.214350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.008 [2024-06-10 10:04:20.214374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:28.008 [2024-06-10 10:04:20.214387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:28.008 [2024-06-10 10:04:20.214398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:28.008 [2024-06-10 10:04:20.214409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.008 [2024-06-10 10:04:20.317756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:28.008 [2024-06-10 10:04:20.317830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:28.008 [2024-06-10 10:04:20.317850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:28.008 [2024-06-10 10:04:20.317862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.008 [2024-06-10 10:04:20.357318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:28.008 [2024-06-10 10:04:20.357378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:28.008 [2024-06-10 10:04:20.357398] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:28.008 [2024-06-10 10:04:20.357409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.008 [2024-06-10 10:04:20.357523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:28.008 [2024-06-10 10:04:20.357540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:28.008 [2024-06-10 10:04:20.357553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:28.008 [2024-06-10 10:04:20.357564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.008 [2024-06-10 10:04:20.357618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:28.008 [2024-06-10 10:04:20.357647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:28.008 [2024-06-10 10:04:20.357659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:28.008 [2024-06-10 10:04:20.357670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.008 [2024-06-10 10:04:20.357794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:28.008 [2024-06-10 10:04:20.357820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:28.008 [2024-06-10 10:04:20.357833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:28.008 [2024-06-10 10:04:20.357844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.008 [2024-06-10 10:04:20.357894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:28.008 [2024-06-10 10:04:20.357911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:28.008 [2024-06-10 10:04:20.357923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:28.008 [2024-06-10 10:04:20.357935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.008 [2024-06-10 10:04:20.357980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:28.008 [2024-06-10 10:04:20.358001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:28.008 [2024-06-10 10:04:20.358013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:28.008 [2024-06-10 10:04:20.358024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.008 [2024-06-10 10:04:20.358085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:28.008 [2024-06-10 10:04:20.358103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:28.008 [2024-06-10 10:04:20.358145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:28.008 [2024-06-10 10:04:20.358156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.008 [2024-06-10 10:04:20.358306] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8779.678 ms, result 0 00:26:29.911 10:04:23 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:29.911 10:04:23 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:29.911 10:04:23 -- ftl/common.sh@81 -- # local base_bdev= 00:26:29.911 10:04:23 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:29.911 10:04:23 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:29.911 10:04:23 -- ftl/common.sh@89 -- # spdk_tgt_pid=79402 00:26:29.911 10:04:23 -- ftl/common.sh@85 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:29.911 10:04:23 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:29.911 10:04:23 -- ftl/common.sh@91 -- # waitforlisten 79402 00:26:29.911 10:04:23 -- common/autotest_common.sh@819 -- # '[' -z 79402 ']' 00:26:29.911 10:04:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.911 10:04:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:29.911 10:04:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.911 10:04:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:29.911 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:26:29.911 [2024-06-10 10:04:23.659404] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:29.911 [2024-06-10 10:04:23.659573] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79402 ] 00:26:30.169 [2024-06-10 10:04:23.826280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.427 [2024-06-10 10:04:24.025084] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:30.427 [2024-06-10 10:04:24.025351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.363 [2024-06-10 10:04:24.829259] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:31.363 [2024-06-10 10:04:24.829341] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:31.363 [2024-06-10 10:04:24.971352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.363 [2024-06-10 10:04:24.971419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:31.363 [2024-06-10 10:04:24.971453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:31.363 [2024-06-10 10:04:24.971466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.363 [2024-06-10 10:04:24.971550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.363 [2024-06-10 10:04:24.971580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:31.363 [2024-06-10 10:04:24.971593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:26:31.363 [2024-06-10 10:04:24.971609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.363 [2024-06-10 10:04:24.971645] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:31.363 [2024-06-10 10:04:24.972612] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:31.363 [2024-06-10 10:04:24.972646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.363 [2024-06-10 10:04:24.972664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:31.363 [2024-06-10 10:04:24.972677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.008 ms 00:26:31.363 [2024-06-10 10:04:24.972687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 
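The target relaunch traced above follows ftl/common.sh's tcp_target_setup. Stripped to the steps visible in the xtrace (a sketch, not the verbatim helper), the restart flow is roughly:

    tcp_target_setup() {
        local base_bdev= cache_bdev=
        local args=('--cpumask=[0]')
        # @84: the tgt.json written before shutdown lets the FTL bdev come back
        # with its prep_upgrade_on_shutdown state, so no bdev names need to be
        # passed on the command line this time around.
        [ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ] && \
            args+=(--config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json)
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt "${args[@]}" &
        spdk_tgt_pid=$!                  # @89: 79402 in this run
        export spdk_tgt_pid              # @90
        waitforlisten "$spdk_tgt_pid"    # @91: block until /var/tmp/spdk.sock answers
    }

Once the RPC socket answers, the FTL startup trace that follows restores the metadata persisted during the prep-shutdown (NV cache, valid map, band info, trim, P2L, L2P).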
00:26:31.363 [2024-06-10 10:04:24.973928] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:31.363 [2024-06-10 10:04:24.990261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.363 [2024-06-10 10:04:24.990323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:31.363 [2024-06-10 10:04:24.990345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.344 ms 00:26:31.363 [2024-06-10 10:04:24.990357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.363 [2024-06-10 10:04:24.990490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.363 [2024-06-10 10:04:24.990516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:31.363 [2024-06-10 10:04:24.990534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:26:31.363 [2024-06-10 10:04:24.990546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.363 [2024-06-10 10:04:24.995296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.363 [2024-06-10 10:04:24.995353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:31.363 [2024-06-10 10:04:24.995371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.610 ms 00:26:31.363 [2024-06-10 10:04:24.995382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.363 [2024-06-10 10:04:24.995478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.363 [2024-06-10 10:04:24.995501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:31.363 [2024-06-10 10:04:24.995514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:26:31.363 [2024-06-10 10:04:24.995525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.363 [2024-06-10 10:04:24.995599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.363 [2024-06-10 10:04:24.995618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:31.363 [2024-06-10 10:04:24.995630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:31.363 [2024-06-10 10:04:24.995640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.363 [2024-06-10 10:04:24.995685] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:31.363 [2024-06-10 10:04:25.000016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.363 [2024-06-10 10:04:25.000055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:31.363 [2024-06-10 10:04:25.000071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.347 ms 00:26:31.363 [2024-06-10 10:04:25.000082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.363 [2024-06-10 10:04:25.000141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.363 [2024-06-10 10:04:25.000160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:31.363 [2024-06-10 10:04:25.000173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:31.363 [2024-06-10 10:04:25.000185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.363 [2024-06-10 10:04:25.000234] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:31.363 [2024-06-10 
10:04:25.000266] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:26:31.363 [2024-06-10 10:04:25.000309] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:31.363 [2024-06-10 10:04:25.000340] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:26:31.363 [2024-06-10 10:04:25.000437] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:31.363 [2024-06-10 10:04:25.000455] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:31.363 [2024-06-10 10:04:25.000470] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:31.363 [2024-06-10 10:04:25.000485] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:31.363 [2024-06-10 10:04:25.000498] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:31.363 [2024-06-10 10:04:25.000511] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:31.363 [2024-06-10 10:04:25.000522] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:31.363 [2024-06-10 10:04:25.000533] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:31.363 [2024-06-10 10:04:25.000560] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:31.363 [2024-06-10 10:04:25.000572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.363 [2024-06-10 10:04:25.000586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:31.363 [2024-06-10 10:04:25.000598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.341 ms 00:26:31.363 [2024-06-10 10:04:25.000609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.363 [2024-06-10 10:04:25.000686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.363 [2024-06-10 10:04:25.000701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:31.363 [2024-06-10 10:04:25.000713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:26:31.363 [2024-06-10 10:04:25.000724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.363 [2024-06-10 10:04:25.000846] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:31.363 [2024-06-10 10:04:25.000865] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:31.363 [2024-06-10 10:04:25.000883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:31.363 [2024-06-10 10:04:25.000895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:31.363 [2024-06-10 10:04:25.000906] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:31.363 [2024-06-10 10:04:25.000917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:31.363 [2024-06-10 10:04:25.000927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:31.363 [2024-06-10 10:04:25.000937] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:31.363 [2024-06-10 10:04:25.000949] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:31.363 
[2024-06-10 10:04:25.000960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:31.363 [2024-06-10 10:04:25.000970] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:31.363 [2024-06-10 10:04:25.000980] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:31.363 [2024-06-10 10:04:25.000989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:31.363 [2024-06-10 10:04:25.001000] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:31.363 [2024-06-10 10:04:25.001010] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:31.363 [2024-06-10 10:04:25.001020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:31.363 [2024-06-10 10:04:25.001030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:31.363 [2024-06-10 10:04:25.001041] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:31.363 [2024-06-10 10:04:25.001052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:31.363 [2024-06-10 10:04:25.001063] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:31.363 [2024-06-10 10:04:25.001073] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:31.363 [2024-06-10 10:04:25.001083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:31.363 [2024-06-10 10:04:25.001099] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:31.364 [2024-06-10 10:04:25.001138] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:31.364 [2024-06-10 10:04:25.001150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:31.364 [2024-06-10 10:04:25.001161] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:31.364 [2024-06-10 10:04:25.001171] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:31.364 [2024-06-10 10:04:25.001183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:31.364 [2024-06-10 10:04:25.001193] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:31.364 [2024-06-10 10:04:25.001204] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:31.364 [2024-06-10 10:04:25.001214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:31.364 [2024-06-10 10:04:25.001224] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:31.364 [2024-06-10 10:04:25.001234] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:31.364 [2024-06-10 10:04:25.001247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:31.364 [2024-06-10 10:04:25.001257] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:31.364 [2024-06-10 10:04:25.001268] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:31.364 [2024-06-10 10:04:25.001278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:31.364 [2024-06-10 10:04:25.001288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:31.364 [2024-06-10 10:04:25.001298] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:31.364 [2024-06-10 10:04:25.001307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:31.364 [2024-06-10 10:04:25.001317] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 
00:26:31.364 [2024-06-10 10:04:25.001329] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:31.364 [2024-06-10 10:04:25.001339] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:31.364 [2024-06-10 10:04:25.001350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:31.364 [2024-06-10 10:04:25.001361] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:31.364 [2024-06-10 10:04:25.001372] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:31.364 [2024-06-10 10:04:25.001382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:31.364 [2024-06-10 10:04:25.001392] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:31.364 [2024-06-10 10:04:25.001401] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:31.364 [2024-06-10 10:04:25.001412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:31.364 [2024-06-10 10:04:25.001425] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:31.364 [2024-06-10 10:04:25.001439] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:31.364 [2024-06-10 10:04:25.001452] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:31.364 [2024-06-10 10:04:25.001464] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:31.364 [2024-06-10 10:04:25.001475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:31.364 [2024-06-10 10:04:25.001486] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:31.364 [2024-06-10 10:04:25.001497] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:31.364 [2024-06-10 10:04:25.001509] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:31.364 [2024-06-10 10:04:25.001520] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:31.364 [2024-06-10 10:04:25.001531] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:31.364 [2024-06-10 10:04:25.001543] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:31.364 [2024-06-10 10:04:25.001554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:31.364 [2024-06-10 10:04:25.001581] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:31.364 [2024-06-10 10:04:25.001593] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:31.364 [2024-06-10 10:04:25.001606] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 
blk_sz:0x3e0a0 00:26:31.364 [2024-06-10 10:04:25.001617] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:31.364 [2024-06-10 10:04:25.001630] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:31.364 [2024-06-10 10:04:25.001647] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:31.364 [2024-06-10 10:04:25.001659] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:31.364 [2024-06-10 10:04:25.001670] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:31.364 [2024-06-10 10:04:25.001682] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:31.364 [2024-06-10 10:04:25.001695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.364 [2024-06-10 10:04:25.001707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:31.364 [2024-06-10 10:04:25.001718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.894 ms 00:26:31.364 [2024-06-10 10:04:25.001730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.364 [2024-06-10 10:04:25.020011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.364 [2024-06-10 10:04:25.020086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:31.364 [2024-06-10 10:04:25.020129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.211 ms 00:26:31.364 [2024-06-10 10:04:25.020146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.364 [2024-06-10 10:04:25.020235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.364 [2024-06-10 10:04:25.020255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:31.364 [2024-06-10 10:04:25.020280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:31.364 [2024-06-10 10:04:25.020294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.364 [2024-06-10 10:04:25.060691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.364 [2024-06-10 10:04:25.060755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:31.364 [2024-06-10 10:04:25.060781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 40.273 ms 00:26:31.364 [2024-06-10 10:04:25.060792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.364 [2024-06-10 10:04:25.060876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.364 [2024-06-10 10:04:25.060894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:31.364 [2024-06-10 10:04:25.060907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:31.364 [2024-06-10 10:04:25.060919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.364 [2024-06-10 10:04:25.061316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.364 [2024-06-10 10:04:25.061337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:31.364 [2024-06-10 10:04:25.061351] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:26:31.364 [2024-06-10 10:04:25.061368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.364 [2024-06-10 10:04:25.061428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.364 [2024-06-10 10:04:25.061446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:31.364 [2024-06-10 10:04:25.061459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:31.364 [2024-06-10 10:04:25.061470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.364 [2024-06-10 10:04:25.080255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.364 [2024-06-10 10:04:25.080320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:31.364 [2024-06-10 10:04:25.080342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.751 ms 00:26:31.364 [2024-06-10 10:04:25.080354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.364 [2024-06-10 10:04:25.097457] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:31.364 [2024-06-10 10:04:25.097531] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:31.364 [2024-06-10 10:04:25.097552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.364 [2024-06-10 10:04:25.097565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:31.364 [2024-06-10 10:04:25.097581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.005 ms 00:26:31.364 [2024-06-10 10:04:25.097592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.364 [2024-06-10 10:04:25.116496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.365 [2024-06-10 10:04:25.116575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:31.365 [2024-06-10 10:04:25.116613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.819 ms 00:26:31.365 [2024-06-10 10:04:25.116625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.623 [2024-06-10 10:04:25.133027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.623 [2024-06-10 10:04:25.133095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:31.623 [2024-06-10 10:04:25.133133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.313 ms 00:26:31.623 [2024-06-10 10:04:25.133147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.623 [2024-06-10 10:04:25.149100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.623 [2024-06-10 10:04:25.149176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:31.623 [2024-06-10 10:04:25.149196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.873 ms 00:26:31.623 [2024-06-10 10:04:25.149207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.623 [2024-06-10 10:04:25.149744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.623 [2024-06-10 10:04:25.149787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:31.623 [2024-06-10 10:04:25.149805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.382 ms 00:26:31.623 
[2024-06-10 10:04:25.149817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.624 [2024-06-10 10:04:25.228021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.624 [2024-06-10 10:04:25.228095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:31.624 [2024-06-10 10:04:25.228132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 78.163 ms 00:26:31.624 [2024-06-10 10:04:25.228146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.624 [2024-06-10 10:04:25.240927] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:31.624 [2024-06-10 10:04:25.241798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.624 [2024-06-10 10:04:25.241835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:31.624 [2024-06-10 10:04:25.241853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.557 ms 00:26:31.624 [2024-06-10 10:04:25.241865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.624 [2024-06-10 10:04:25.241981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.624 [2024-06-10 10:04:25.242012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:31.624 [2024-06-10 10:04:25.242026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:31.624 [2024-06-10 10:04:25.242038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.624 [2024-06-10 10:04:25.242129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.624 [2024-06-10 10:04:25.242150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:31.624 [2024-06-10 10:04:25.242163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:31.624 [2024-06-10 10:04:25.242174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.624 [2024-06-10 10:04:25.244167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.624 [2024-06-10 10:04:25.244210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:31.624 [2024-06-10 10:04:25.244230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.960 ms 00:26:31.624 [2024-06-10 10:04:25.244242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.624 [2024-06-10 10:04:25.244284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.624 [2024-06-10 10:04:25.244301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:31.624 [2024-06-10 10:04:25.244316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:31.624 [2024-06-10 10:04:25.244327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.624 [2024-06-10 10:04:25.244375] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:31.624 [2024-06-10 10:04:25.244393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.624 [2024-06-10 10:04:25.244405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:31.624 [2024-06-10 10:04:25.244417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:31.624 [2024-06-10 10:04:25.244433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.624 [2024-06-10 10:04:25.275731] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.624 [2024-06-10 10:04:25.275800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:31.624 [2024-06-10 10:04:25.275821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 31.268 ms 00:26:31.624 [2024-06-10 10:04:25.275833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.624 [2024-06-10 10:04:25.275936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.624 [2024-06-10 10:04:25.275956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:31.624 [2024-06-10 10:04:25.275978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:26:31.624 [2024-06-10 10:04:25.275989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.624 [2024-06-10 10:04:25.277173] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 305.316 ms, result 0 00:26:31.624 [2024-06-10 10:04:25.292201] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.624 [2024-06-10 10:04:25.308256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:31.624 [2024-06-10 10:04:25.317270] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:32.274 10:04:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:32.274 10:04:26 -- common/autotest_common.sh@852 -- # return 0 00:26:32.274 10:04:26 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:32.274 10:04:26 -- ftl/common.sh@95 -- # return 0 00:26:32.274 10:04:26 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:32.533 [2024-06-10 10:04:26.279575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.533 [2024-06-10 10:04:26.279666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:32.533 [2024-06-10 10:04:26.279700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:32.533 [2024-06-10 10:04:26.279724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.533 [2024-06-10 10:04:26.279783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.533 [2024-06-10 10:04:26.279811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:32.533 [2024-06-10 10:04:26.279834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:32.533 [2024-06-10 10:04:26.279856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.533 [2024-06-10 10:04:26.279909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.533 [2024-06-10 10:04:26.279934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:32.533 [2024-06-10 10:04:26.279957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:32.533 [2024-06-10 10:04:26.279985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.533 [2024-06-10 10:04:26.280133] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.513 ms, result 0 00:26:32.533 true 00:26:32.791 10:04:26 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:33.050 
{ 00:26:33.050 "name": "ftl", 00:26:33.050 "properties": [ 00:26:33.050 { 00:26:33.050 "name": "superblock_version", 00:26:33.050 "value": 5, 00:26:33.050 "read-only": true 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "name": "base_device", 00:26:33.050 "bands": [ 00:26:33.050 { 00:26:33.050 "id": 0, 00:26:33.050 "state": "CLOSED", 00:26:33.050 "validity": 1.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 1, 00:26:33.050 "state": "CLOSED", 00:26:33.050 "validity": 1.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 2, 00:26:33.050 "state": "CLOSED", 00:26:33.050 "validity": 0.007843137254901933 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 3, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 4, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 5, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 6, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 7, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 8, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 9, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 10, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 11, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 12, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 13, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 14, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 15, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 16, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 17, 00:26:33.050 "state": "FREE", 00:26:33.050 "validity": 0.0 00:26:33.050 } 00:26:33.050 ], 00:26:33.050 "read-only": true 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "name": "cache_device", 00:26:33.050 "type": "bdev", 00:26:33.050 "chunks": [ 00:26:33.050 { 00:26:33.050 "id": 0, 00:26:33.050 "state": "OPEN", 00:26:33.050 "utilization": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 1, 00:26:33.050 "state": "OPEN", 00:26:33.050 "utilization": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 2, 00:26:33.050 "state": "FREE", 00:26:33.050 "utilization": 0.0 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "id": 3, 00:26:33.050 "state": "FREE", 00:26:33.050 "utilization": 0.0 00:26:33.050 } 00:26:33.050 ], 00:26:33.050 "read-only": true 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "name": "verbose_mode", 00:26:33.050 "value": true, 00:26:33.050 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:33.050 }, 00:26:33.050 { 00:26:33.050 "name": "prep_upgrade_on_shutdown", 00:26:33.050 "value": false, 00:26:33.050 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:33.050 } 00:26:33.050 ] 00:26:33.050 } 00:26:33.050 10:04:26 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:33.050 10:04:26 -- 
ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:33.050 10:04:26 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:33.309 10:04:26 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:33.309 10:04:26 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:33.309 10:04:26 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:33.309 10:04:26 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:33.309 10:04:26 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:33.570 Validate MD5 checksum, iteration 1 00:26:33.570 10:04:27 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:33.570 10:04:27 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:33.570 10:04:27 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:33.570 10:04:27 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:33.570 10:04:27 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:33.570 10:04:27 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:33.570 10:04:27 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:33.570 10:04:27 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:33.570 10:04:27 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:33.570 10:04:27 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:33.570 10:04:27 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:33.570 10:04:27 -- ftl/common.sh@154 -- # return 0 00:26:33.570 10:04:27 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:33.570 [2024-06-10 10:04:27.273583] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
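
The two jq probes above reduce to standalone checks against the bdev_ftl_get_properties output. A minimal sketch, assuming the RPC output is captured to a file (props.json is a hypothetical name, not from this run):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl > props.json
  # used: cache chunks with non-zero utilization (0 in the run above)
  jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' props.json
  # opened: bands reported as OPENED (also 0 above); note the properties dump
  # names the band-bearing property "base_device", so a filter keyed on
  # .name == "bands" selects nothing and returns 0 on that output regardless
  jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' props.json
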
00:26:33.570 [2024-06-10 10:04:27.273972] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79450 ] 00:26:33.828 [2024-06-10 10:04:27.443432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.086 [2024-06-10 10:04:27.641513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.711  Copying: 437/1024 [MB] (437 MBps) Copying: 884/1024 [MB] (447 MBps) Copying: 1024/1024 [MB] (average 439 MBps) 00:26:38.711 00:26:38.711 10:04:32 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:38.711 10:04:32 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:40.616 10:04:34 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:40.616 10:04:34 -- ftl/upgrade_shutdown.sh@103 -- # sum=bd2dbe8cc33fb17c53feee147924878f 00:26:40.616 10:04:34 -- ftl/upgrade_shutdown.sh@105 -- # [[ bd2dbe8cc33fb17c53feee147924878f != \b\d\2\d\b\e\8\c\c\3\3\f\b\1\7\c\5\3\f\e\e\e\1\4\7\9\2\4\8\7\8\f ]] 00:26:40.616 Validate MD5 checksum, iteration 2 00:26:40.616 10:04:34 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:40.616 10:04:34 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:40.616 10:04:34 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:40.616 10:04:34 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:40.616 10:04:34 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:40.616 10:04:34 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:40.616 10:04:34 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:40.616 10:04:34 -- ftl/common.sh@154 -- # return 0 00:26:40.616 10:04:34 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:40.874 [2024-06-10 10:04:34.431814] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
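
Pieced together from the xtrace, each validation pass reads 1 GiB from ftln1 at a growing offset and compares its md5 against the sum recorded when the data was originally written. A rough sketch of the loop (tcp_dd abbreviates the spdk_dd invocation shown above; sums is a stand-in for wherever the reference checksums live, not a name from this log):

  skip=0
  for (( i = 0; i < iterations; i++ )); do
      echo "Validate MD5 checksum, iteration $(( i + 1 ))"
      tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
          --bs=1048576 --count=1024 --qd=2 --skip=$skip
      (( skip += 1024 ))
      sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
      [[ $sum == "${sums[i]}" ]]    # bd2dbe8c... on pass 1, 55934c02... on pass 2
  done
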
00:26:40.874 [2024-06-10 10:04:34.432163] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79529 ] 00:26:40.874 [2024-06-10 10:04:34.592013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.132 [2024-06-10 10:04:34.793736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.874  Copying: 504/1024 [MB] (504 MBps) Copying: 965/1024 [MB] (461 MBps) Copying: 1024/1024 [MB] (average 477 MBps) 00:26:45.874 00:26:45.874 10:04:39 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:45.874 10:04:39 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:48.455 10:04:41 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:48.455 10:04:41 -- ftl/upgrade_shutdown.sh@103 -- # sum=55934c02ff4f518788588811e9287a49 00:26:48.455 10:04:41 -- ftl/upgrade_shutdown.sh@105 -- # [[ 55934c02ff4f518788588811e9287a49 != \5\5\9\3\4\c\0\2\f\f\4\f\5\1\8\7\8\8\5\8\8\8\1\1\e\9\2\8\7\a\4\9 ]] 00:26:48.455 10:04:41 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:48.455 10:04:41 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:48.455 10:04:41 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:48.455 10:04:41 -- ftl/common.sh@137 -- # [[ -n 79402 ]] 00:26:48.455 10:04:41 -- ftl/common.sh@138 -- # kill -9 79402 00:26:48.455 10:04:41 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:48.455 10:04:41 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:48.455 10:04:41 -- ftl/common.sh@81 -- # local base_bdev= 00:26:48.455 10:04:41 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:48.455 10:04:41 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:48.455 10:04:41 -- ftl/common.sh@89 -- # spdk_tgt_pid=79608 00:26:48.455 10:04:41 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:48.455 10:04:41 -- ftl/common.sh@91 -- # waitforlisten 79608 00:26:48.455 10:04:41 -- common/autotest_common.sh@819 -- # '[' -z 79608 ']' 00:26:48.455 10:04:41 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:48.455 10:04:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.455 10:04:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:48.455 10:04:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.455 10:04:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:48.455 10:04:41 -- common/autotest_common.sh@10 -- # set +x 00:26:48.455 [2024-06-10 10:04:41.728088] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
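
The dirty shutdown above is nothing more than a SIGKILL of the running target followed by a fresh start from the same saved configuration, forcing the new process to rebuild FTL state from media. Schematically (function names from the xtrace; the bodies below are a sketch, not the verbatim helpers):

  # tcp_target_shutdown_dirty: kill the target before FTL can persist a clean state
  kill -9 "$spdk_tgt_pid"
  unset spdk_tgt_pid

  # tcp_target_setup: relaunch from the captured JSON config; the FTL bdev then
  # comes up through the recovery path traced below (band/chunk/P2L recovery)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"
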
00:26:48.455 [2024-06-10 10:04:41.728447] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79608 ] 00:26:48.455 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 818: 79402 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:48.455 [2024-06-10 10:04:41.904339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.455 [2024-06-10 10:04:42.138894] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:48.455 [2024-06-10 10:04:42.139479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.389 [2024-06-10 10:04:42.957448] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:49.389 [2024-06-10 10:04:42.957724] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:49.389 [2024-06-10 10:04:43.100156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.389 [2024-06-10 10:04:43.100341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:49.389 [2024-06-10 10:04:43.100470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:49.389 [2024-06-10 10:04:43.100523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.389 [2024-06-10 10:04:43.100721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.389 [2024-06-10 10:04:43.100798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:49.389 [2024-06-10 10:04:43.100992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:26:49.389 [2024-06-10 10:04:43.101054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.389 [2024-06-10 10:04:43.101141] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:49.389 [2024-06-10 10:04:43.102240] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:49.389 [2024-06-10 10:04:43.102428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.389 [2024-06-10 10:04:43.102559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:49.389 [2024-06-10 10:04:43.102612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.295 ms 00:26:49.389 [2024-06-10 10:04:43.102711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.389 [2024-06-10 10:04:43.103358] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:49.389 [2024-06-10 10:04:43.126090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.389 [2024-06-10 10:04:43.126213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:49.389 [2024-06-10 10:04:43.126249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.735 ms 00:26:49.389 [2024-06-10 10:04:43.126261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.389 [2024-06-10 10:04:43.140060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.389 [2024-06-10 10:04:43.140115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:49.389 [2024-06-10 10:04:43.140134] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:49.389 [2024-06-10 10:04:43.140146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.389 [2024-06-10 10:04:43.140694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.389 [2024-06-10 10:04:43.140722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:49.389 [2024-06-10 10:04:43.140736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.446 ms 00:26:49.389 [2024-06-10 10:04:43.140748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.389 [2024-06-10 10:04:43.140796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.389 [2024-06-10 10:04:43.140814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:49.389 [2024-06-10 10:04:43.140828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:49.389 [2024-06-10 10:04:43.140838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.389 [2024-06-10 10:04:43.140880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.389 [2024-06-10 10:04:43.140896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:49.389 [2024-06-10 10:04:43.140908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:49.389 [2024-06-10 10:04:43.140919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.389 [2024-06-10 10:04:43.140951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:49.389 [2024-06-10 10:04:43.144991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.389 [2024-06-10 10:04:43.145031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:49.389 [2024-06-10 10:04:43.145047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.050 ms 00:26:49.389 [2024-06-10 10:04:43.145059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.389 [2024-06-10 10:04:43.145126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.389 [2024-06-10 10:04:43.145145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:49.389 [2024-06-10 10:04:43.145159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:49.389 [2024-06-10 10:04:43.145170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.389 [2024-06-10 10:04:43.145222] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:49.389 [2024-06-10 10:04:43.145253] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:26:49.389 [2024-06-10 10:04:43.145294] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:49.389 [2024-06-10 10:04:43.145324] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:26:49.389 [2024-06-10 10:04:43.145410] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:49.389 [2024-06-10 10:04:43.145426] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:49.389 [2024-06-10 10:04:43.145441] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:26:49.389 [2024-06-10 10:04:43.145464] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:49.389 [2024-06-10 10:04:43.145477] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:49.389 [2024-06-10 10:04:43.145489] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:49.389 [2024-06-10 10:04:43.145500] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:49.389 [2024-06-10 10:04:43.145511] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:49.389 [2024-06-10 10:04:43.145521] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:49.389 [2024-06-10 10:04:43.145532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.389 [2024-06-10 10:04:43.145544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:49.389 [2024-06-10 10:04:43.145556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.313 ms 00:26:49.389 [2024-06-10 10:04:43.145567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.389 [2024-06-10 10:04:43.145648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.389 [2024-06-10 10:04:43.145668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:49.389 [2024-06-10 10:04:43.145680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:26:49.389 [2024-06-10 10:04:43.145690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.389 [2024-06-10 10:04:43.145783] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:49.389 [2024-06-10 10:04:43.145810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:49.389 [2024-06-10 10:04:43.145824] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:49.389 [2024-06-10 10:04:43.145836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:49.389 [2024-06-10 10:04:43.145847] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:49.389 [2024-06-10 10:04:43.145861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:49.389 [2024-06-10 10:04:43.145880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:49.389 [2024-06-10 10:04:43.145893] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:49.389 [2024-06-10 10:04:43.145904] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:49.389 [2024-06-10 10:04:43.145914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:49.389 [2024-06-10 10:04:43.145924] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:49.389 [2024-06-10 10:04:43.145938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:49.389 [2024-06-10 10:04:43.145955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:49.389 [2024-06-10 10:04:43.145966] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:49.389 [2024-06-10 10:04:43.145977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:49.389 [2024-06-10 10:04:43.145988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:49.389 [2024-06-10 10:04:43.146000] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:26:49.390 [2024-06-10 10:04:43.146011] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:49.390 [2024-06-10 10:04:43.146022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:49.390 [2024-06-10 10:04:43.146040] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:49.390 [2024-06-10 10:04:43.146057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:49.390 [2024-06-10 10:04:43.146069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:49.390 [2024-06-10 10:04:43.146079] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:49.390 [2024-06-10 10:04:43.146089] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:49.390 [2024-06-10 10:04:43.146099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:49.390 [2024-06-10 10:04:43.146133] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:49.390 [2024-06-10 10:04:43.146146] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:49.390 [2024-06-10 10:04:43.146156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:49.390 [2024-06-10 10:04:43.146167] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:49.390 [2024-06-10 10:04:43.146177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:49.390 [2024-06-10 10:04:43.146187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:49.390 [2024-06-10 10:04:43.146198] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:49.390 [2024-06-10 10:04:43.146216] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:49.390 [2024-06-10 10:04:43.146233] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:49.390 [2024-06-10 10:04:43.146245] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:49.390 [2024-06-10 10:04:43.146255] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:49.390 [2024-06-10 10:04:43.146265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:49.390 [2024-06-10 10:04:43.146275] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:49.390 [2024-06-10 10:04:43.146291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:49.390 [2024-06-10 10:04:43.146306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:49.390 [2024-06-10 10:04:43.146316] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:49.390 [2024-06-10 10:04:43.146327] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:49.390 [2024-06-10 10:04:43.146344] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:49.390 [2024-06-10 10:04:43.146355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:49.390 [2024-06-10 10:04:43.146368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:49.390 [2024-06-10 10:04:43.146386] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:49.390 [2024-06-10 10:04:43.146400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:49.390 [2024-06-10 10:04:43.146411] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:49.390 [2024-06-10 10:04:43.146422] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:26:49.390 [2024-06-10 10:04:43.146433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:49.390 [2024-06-10 10:04:43.146444] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:49.390 [2024-06-10 10:04:43.146466] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:49.390 [2024-06-10 10:04:43.146483] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:49.390 [2024-06-10 10:04:43.146495] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:49.390 [2024-06-10 10:04:43.146506] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:49.390 [2024-06-10 10:04:43.146517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:49.390 [2024-06-10 10:04:43.146528] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:49.390 [2024-06-10 10:04:43.146540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:49.390 [2024-06-10 10:04:43.146559] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:49.390 [2024-06-10 10:04:43.146574] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:49.390 [2024-06-10 10:04:43.146599] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:49.390 [2024-06-10 10:04:43.146610] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:49.390 [2024-06-10 10:04:43.146623] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:49.390 [2024-06-10 10:04:43.146643] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:49.390 [2024-06-10 10:04:43.146656] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:49.390 [2024-06-10 10:04:43.146667] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:49.390 [2024-06-10 10:04:43.146680] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:49.390 [2024-06-10 10:04:43.146692] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:49.390 [2024-06-10 10:04:43.146704] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:49.390 [2024-06-10 10:04:43.146715] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:49.390 
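
The hex extents in this superblock dump line up with the MiB figures in the layout dump above once scaled by the 4 KiB FTL block size (the block size is inferred from the two dumps, not printed in this log). For example, the l2p region (type:0x2, blk_offs:0x20, blk_sz:0xe80) sits 0x20 blocks = 0.12 MiB in and runs 0xe80 blocks = 14.50 MiB, matching "Region l2p ... offset: 0.12 MiB ... blocks: 14.50 MiB":

  echo "scale=2; $(( 0x20 ))  * 4096 / 1048576" | bc    # .12   (offset, MiB)
  echo "scale=2; $(( 0xe80 )) * 4096 / 1048576" | bc    # 14.50 (size, MiB)
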
[2024-06-10 10:04:43.146731] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:49.390 [2024-06-10 10:04:43.146753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.390 [2024-06-10 10:04:43.146766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:49.390 [2024-06-10 10:04:43.146777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.018 ms 00:26:49.390 [2024-06-10 10:04:43.146788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.649 [2024-06-10 10:04:43.164122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.649 [2024-06-10 10:04:43.164165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:49.649 [2024-06-10 10:04:43.164183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.265 ms 00:26:49.649 [2024-06-10 10:04:43.164195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.649 [2024-06-10 10:04:43.164248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.649 [2024-06-10 10:04:43.164270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:49.649 [2024-06-10 10:04:43.164282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:49.649 [2024-06-10 10:04:43.164293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.649 [2024-06-10 10:04:43.206630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.206683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:49.650 [2024-06-10 10:04:43.206702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 42.265 ms 00:26:49.650 [2024-06-10 10:04:43.206714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.206785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.206802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:49.650 [2024-06-10 10:04:43.206821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:49.650 [2024-06-10 10:04:43.206832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.206976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.207001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:49.650 [2024-06-10 10:04:43.207014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:26:49.650 [2024-06-10 10:04:43.207026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.207081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.207097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:49.650 [2024-06-10 10:04:43.207125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:49.650 [2024-06-10 10:04:43.207143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.227469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.227521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:49.650 [2024-06-10 
10:04:43.227539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.293 ms 00:26:49.650 [2024-06-10 10:04:43.227557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.227730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.227766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:49.650 [2024-06-10 10:04:43.227780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:49.650 [2024-06-10 10:04:43.227791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.250668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.250729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:49.650 [2024-06-10 10:04:43.250777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.848 ms 00:26:49.650 [2024-06-10 10:04:43.250789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.264333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.264395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:49.650 [2024-06-10 10:04:43.264427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.413 ms 00:26:49.650 [2024-06-10 10:04:43.264439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.348388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.348461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:49.650 [2024-06-10 10:04:43.348482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 83.876 ms 00:26:49.650 [2024-06-10 10:04:43.348495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.348613] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:49.650 [2024-06-10 10:04:43.348697] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:49.650 [2024-06-10 10:04:43.348744] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:49.650 [2024-06-10 10:04:43.348791] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:49.650 [2024-06-10 10:04:43.348804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.348816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:49.650 [2024-06-10 10:04:43.348828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.237 ms 00:26:49.650 [2024-06-10 10:04:43.348843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.348932] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:49.650 [2024-06-10 10:04:43.348959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.348971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:49.650 [2024-06-10 10:04:43.348983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:49.650 [2024-06-10 
10:04:43.348994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.369939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.369993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:49.650 [2024-06-10 10:04:43.370011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.912 ms 00:26:49.650 [2024-06-10 10:04:43.370024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.382378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.382421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:49.650 [2024-06-10 10:04:43.382438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:49.650 [2024-06-10 10:04:43.382449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.382520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:49.650 [2024-06-10 10:04:43.382539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:26:49.650 [2024-06-10 10:04:43.382558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:49.650 [2024-06-10 10:04:43.382570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:49.650 [2024-06-10 10:04:43.382716] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:26:50.217 [2024-06-10 10:04:43.914368] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:26:50.217 [2024-06-10 10:04:43.914606] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:26:50.784 [2024-06-10 10:04:44.436792] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:26:50.784 [2024-06-10 10:04:44.436928] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:50.784 [2024-06-10 10:04:44.436954] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:50.784 [2024-06-10 10:04:44.436970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.784 [2024-06-10 10:04:44.436983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:50.784 [2024-06-10 10:04:44.437001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1054.379 ms 00:26:50.784 [2024-06-10 10:04:44.437023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.784 [2024-06-10 10:04:44.437074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.784 [2024-06-10 10:04:44.437099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:50.784 [2024-06-10 10:04:44.437151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:50.784 [2024-06-10 10:04:44.437163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.784 [2024-06-10 10:04:44.451263] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:50.784 [2024-06-10 10:04:44.451470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.784 [2024-06-10 10:04:44.451498] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:50.784 [2024-06-10 10:04:44.451513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.273 ms 00:26:50.784 [2024-06-10 10:04:44.451526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.784 [2024-06-10 10:04:44.452422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.784 [2024-06-10 10:04:44.452459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:26:50.784 [2024-06-10 10:04:44.452474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.758 ms 00:26:50.784 [2024-06-10 10:04:44.452492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.784 [2024-06-10 10:04:44.455284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.784 [2024-06-10 10:04:44.455330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:50.784 [2024-06-10 10:04:44.455347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.763 ms 00:26:50.784 [2024-06-10 10:04:44.455359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.784 [2024-06-10 10:04:44.493544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.784 [2024-06-10 10:04:44.493601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:26:50.784 [2024-06-10 10:04:44.493650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 38.120 ms 00:26:50.784 [2024-06-10 10:04:44.493664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.784 [2024-06-10 10:04:44.493873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.784 [2024-06-10 10:04:44.493921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:50.784 [2024-06-10 10:04:44.493933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:26:50.784 [2024-06-10 10:04:44.493960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.784 [2024-06-10 10:04:44.496302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.784 [2024-06-10 10:04:44.496345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:50.784 [2024-06-10 10:04:44.496391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.292 ms 00:26:50.784 [2024-06-10 10:04:44.496421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.784 [2024-06-10 10:04:44.496476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.784 [2024-06-10 10:04:44.496490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:50.784 [2024-06-10 10:04:44.496517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:50.784 [2024-06-10 10:04:44.496527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.784 [2024-06-10 10:04:44.496601] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:50.784 [2024-06-10 10:04:44.496618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.784 [2024-06-10 10:04:44.496628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:50.784 [2024-06-10 10:04:44.496640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:50.784 [2024-06-10 10:04:44.496650] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:50.784 [2024-06-10 10:04:44.496720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.784 [2024-06-10 10:04:44.496735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:50.784 [2024-06-10 10:04:44.496748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:50.784 [2024-06-10 10:04:44.496758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.784 [2024-06-10 10:04:44.498081] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1397.330 ms, result 0 00:26:50.784 [2024-06-10 10:04:44.510651] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.784 [2024-06-10 10:04:44.526513] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:50.784 [2024-06-10 10:04:44.535885] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:51.350 Validate MD5 checksum, iteration 1 00:26:51.350 10:04:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:51.350 10:04:45 -- common/autotest_common.sh@852 -- # return 0 00:26:51.350 10:04:45 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:51.350 10:04:45 -- ftl/common.sh@95 -- # return 0 00:26:51.350 10:04:45 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:51.350 10:04:45 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:51.350 10:04:45 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:51.350 10:04:45 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:51.350 10:04:45 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:51.350 10:04:45 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:51.350 10:04:45 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:51.350 10:04:45 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:51.350 10:04:45 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:51.350 10:04:45 -- ftl/common.sh@154 -- # return 0 00:26:51.350 10:04:45 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:51.350 [2024-06-10 10:04:45.102745] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
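
tcp_initiator_setup only verifies that the saved initiator config exists, so ftln1 materializes on the spdk_dd side purely through ini.json, written earlier in the test. Presumably it amounts to an NVMe/TCP attach along these lines (the file contents and subsystem NQN below are a guess, not shown in this log; a controller named "ftl" is what would yield the namespace bdev ftln1):

  cat > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "ftl", "trtype": "tcp", "adrfam": "ipv4",
                  "traddr": "127.0.0.1", "trsvcid": "4420",
                  "subnqn": "nqn.2018-09.io.spdk:cnode0" } } ] } ] }
  EOF
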
00:26:51.350 [2024-06-10 10:04:45.103141] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79654 ] 00:26:51.608 [2024-06-10 10:04:45.274992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.866 [2024-06-10 10:04:45.499386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.603  Copying: 482/1024 [MB] (482 MBps) Copying: 928/1024 [MB] (446 MBps) Copying: 1024/1024 [MB] (average 460 MBps) 00:26:56.603 00:26:56.603 10:04:50 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:56.603 10:04:50 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:59.143 10:04:52 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:59.143 Validate MD5 checksum, iteration 2 00:26:59.143 10:04:52 -- ftl/upgrade_shutdown.sh@103 -- # sum=bd2dbe8cc33fb17c53feee147924878f 00:26:59.143 10:04:52 -- ftl/upgrade_shutdown.sh@105 -- # [[ bd2dbe8cc33fb17c53feee147924878f != \b\d\2\d\b\e\8\c\c\3\3\f\b\1\7\c\5\3\f\e\e\e\1\4\7\9\2\4\8\7\8\f ]] 00:26:59.143 10:04:52 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:59.143 10:04:52 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:59.143 10:04:52 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:59.143 10:04:52 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:59.143 10:04:52 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:59.143 10:04:52 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:59.143 10:04:52 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:59.143 10:04:52 -- ftl/common.sh@154 -- # return 0 00:26:59.143 10:04:52 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:59.143 [2024-06-10 10:04:52.421328] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
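
Both post-restart checksums come back identical to the pre-kill values (bd2dbe8c... above and 55934c02... below), which is the point of the test: data written before the dirty shutdown is intact after recovery. The backslash-riddled right-hand side in each comparison is just set -x rendering: inside [[ ]] the right operand of != is a glob pattern, so the expected sum is quoted to force a literal match, and xtrace prints that quoting as per-character escapes. The underlying check is simply:

  expected=bd2dbe8cc33fb17c53feee147924878f
  [[ $sum != "$expected" ]]    # traced as [[ ... != \b\d\2\d\b\e... ]]
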
00:26:59.143 [2024-06-10 10:04:52.421467] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79732 ] 00:26:59.143 [2024-06-10 10:04:52.586514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.143 [2024-06-10 10:04:52.827593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.888  Copying: 510/1024 [MB] (510 MBps) Copying: 931/1024 [MB] (421 MBps) Copying: 1024/1024 [MB] (average 460 MBps) 00:27:03.888 00:27:03.888 10:04:57 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:03.888 10:04:57 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:05.794 10:04:59 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:05.794 10:04:59 -- ftl/upgrade_shutdown.sh@103 -- # sum=55934c02ff4f518788588811e9287a49 00:27:05.794 10:04:59 -- ftl/upgrade_shutdown.sh@105 -- # [[ 55934c02ff4f518788588811e9287a49 != \5\5\9\3\4\c\0\2\f\f\4\f\5\1\8\7\8\8\5\8\8\8\1\1\e\9\2\8\7\a\4\9 ]] 00:27:05.794 10:04:59 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:05.794 10:04:59 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:05.794 10:04:59 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:05.794 10:04:59 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:05.794 10:04:59 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:05.794 10:04:59 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:05.794 10:04:59 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:05.794 10:04:59 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:05.794 10:04:59 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:05.794 10:04:59 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:05.794 10:04:59 -- ftl/common.sh@130 -- # [[ -n 79608 ]] 00:27:05.794 10:04:59 -- ftl/common.sh@131 -- # killprocess 79608 00:27:05.794 10:04:59 -- common/autotest_common.sh@926 -- # '[' -z 79608 ']' 00:27:05.794 10:04:59 -- common/autotest_common.sh@930 -- # kill -0 79608 00:27:05.794 10:04:59 -- common/autotest_common.sh@931 -- # uname 00:27:05.795 10:04:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:05.795 10:04:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79608 00:27:05.795 killing process with pid 79608 00:27:05.795 10:04:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:05.795 10:04:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:05.795 10:04:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79608' 00:27:05.795 10:04:59 -- common/autotest_common.sh@945 -- # kill 79608 00:27:05.795 10:04:59 -- common/autotest_common.sh@950 -- # wait 79608 00:27:06.731 [2024-06-10 10:05:00.489315] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:06.990 [2024-06-10 10:05:00.508669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.990 [2024-06-10 10:05:00.508720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:06.990 [2024-06-10 10:05:00.508756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:06.990 [2024-06-10 10:05:00.508775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.990 [2024-06-10 
10:05:00.508806] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:27:06.990 [2024-06-10 10:05:00.512357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.990 [2024-06-10 10:05:00.512399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:27:06.990 [2024-06-10 10:05:00.512422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.511 ms
00:27:06.990 [2024-06-10 10:05:00.512442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.990 [2024-06-10 10:05:00.512748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.990 [2024-06-10 10:05:00.512775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:27:06.990 [2024-06-10 10:05:00.512796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.246 ms
00:27:06.990 [2024-06-10 10:05:00.512808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.990 [2024-06-10 10:05:00.514092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.990 [2024-06-10 10:05:00.514157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:27:06.990 [2024-06-10 10:05:00.514173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.259 ms
00:27:06.990 [2024-06-10 10:05:00.514184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.990 [2024-06-10 10:05:00.515482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.990 [2024-06-10 10:05:00.515515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps
00:27:06.990 [2024-06-10 10:05:00.515531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.255 ms
00:27:06.990 [2024-06-10 10:05:00.515542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.990 [2024-06-10 10:05:00.529005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.990 [2024-06-10 10:05:00.529066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:27:06.990 [2024-06-10 10:05:00.529088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.396 ms
00:27:06.990 [2024-06-10 10:05:00.529125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.990 [2024-06-10 10:05:00.536068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.990 [2024-06-10 10:05:00.536126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:27:06.990 [2024-06-10 10:05:00.536145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.887 ms
00:27:06.990 [2024-06-10 10:05:00.536157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.990 [2024-06-10 10:05:00.536267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.990 [2024-06-10 10:05:00.536297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:27:06.990 [2024-06-10 10:05:00.536311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms
00:27:06.990 [2024-06-10 10:05:00.536322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.990 [2024-06-10 10:05:00.549492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.990 [2024-06-10 10:05:00.549535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata
00:27:06.990 [2024-06-10 10:05:00.549573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.146 ms
00:27:06.990 [2024-06-10 10:05:00.549593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.990 [2024-06-10 10:05:00.562969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.990 [2024-06-10 10:05:00.563026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata
00:27:06.990 [2024-06-10 10:05:00.563046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.326 ms
00:27:06.990 [2024-06-10 10:05:00.563066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.990 [2024-06-10 10:05:00.576507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.991 [2024-06-10 10:05:00.576585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:27:06.991 [2024-06-10 10:05:00.576605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.362 ms
00:27:06.991 [2024-06-10 10:05:00.576617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.991 [2024-06-10 10:05:00.589618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.991 [2024-06-10 10:05:00.589677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:27:06.991 [2024-06-10 10:05:00.589694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.909 ms
00:27:06.991 [2024-06-10 10:05:00.589705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.991 [2024-06-10 10:05:00.589750] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:27:06.991 [2024-06-10 10:05:00.589775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:27:06.991 [2024-06-10 10:05:00.589790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:27:06.991 [2024-06-10 10:05:00.589803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:27:06.991 [2024-06-10 10:05:00.589815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:27:06.991 [2024-06-10 10:05:00.589996] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:27:06.991 [2024-06-10 10:05:00.590025] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 4a4fca22-6c25-4c8b-9d9b-1a0085f88bac
00:27:06.991 [2024-06-10 10:05:00.590038] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:27:06.991 [2024-06-10 10:05:00.590049] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:27:06.991 [2024-06-10 10:05:00.590059] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:27:06.991 [2024-06-10 10:05:00.590079] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:27:06.991 [2024-06-10 10:05:00.590090] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:27:06.991 [2024-06-10 10:05:00.590101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:27:06.991 [2024-06-10 10:05:00.590135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:27:06.991 [2024-06-10 10:05:00.590146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:27:06.991 [2024-06-10 10:05:00.590156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:27:06.991 [2024-06-10 10:05:00.590170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.991 [2024-06-10 10:05:00.590181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:27:06.991 [2024-06-10 10:05:00.590194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms
00:27:06.991 [2024-06-10 10:05:00.590206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.991 [2024-06-10 10:05:00.607684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.991 [2024-06-10 10:05:00.607733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:27:06.991 [2024-06-10 10:05:00.607750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.452 ms
00:27:06.991 [2024-06-10 10:05:00.607761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.991 [2024-06-10 10:05:00.608020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:06.991 [2024-06-10 10:05:00.608037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:27:06.991 [2024-06-10 10:05:00.608050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms
00:27:06.991 [2024-06-10 10:05:00.608061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.991 [2024-06-10 10:05:00.669514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:06.991 [2024-06-10 10:05:00.669594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:27:06.991 [2024-06-10 10:05:00.669615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:06.991 [2024-06-10 10:05:00.669627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.991 [2024-06-10 10:05:00.669689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:06.991 [2024-06-10 10:05:00.669703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:27:06.991 [2024-06-10 10:05:00.669715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:06.991 [2024-06-10 10:05:00.669726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.991 [2024-06-10 10:05:00.669842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:06.991 [2024-06-10 10:05:00.669861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:27:06.991 [2024-06-10 10:05:00.669887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:06.991 [2024-06-10 10:05:00.669920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:06.991 [2024-06-10 10:05:00.669957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:06.991 [2024-06-10 10:05:00.669974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:27:06.991 [2024-06-10 10:05:00.669995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:06.991 [2024-06-10 10:05:00.670013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:07.250 [2024-06-10 10:05:00.777483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:07.250 [2024-06-10 10:05:00.777560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:27:07.250 [2024-06-10 10:05:00.777580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:07.250 [2024-06-10 10:05:00.777592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:07.250 [2024-06-10 10:05:00.818442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:07.250 [2024-06-10 10:05:00.818506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:27:07.250 [2024-06-10 10:05:00.818526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:07.250 [2024-06-10 10:05:00.818538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:07.250 [2024-06-10 10:05:00.818635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:07.250 [2024-06-10 10:05:00.818653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:27:07.250 [2024-06-10 10:05:00.818665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:07.250 [2024-06-10 10:05:00.818689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:07.250 [2024-06-10 10:05:00.818747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:07.250 [2024-06-10 10:05:00.818763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:27:07.250 [2024-06-10 10:05:00.818776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:07.250 [2024-06-10 10:05:00.818786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:07.250 [2024-06-10 10:05:00.818908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:07.250 [2024-06-10 10:05:00.818929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:27:07.250 [2024-06-10 10:05:00.818944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:07.250 [2024-06-10 10:05:00.818957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:07.250 [2024-06-10 10:05:00.819015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:07.250 [2024-06-10 10:05:00.819033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:27:07.250 [2024-06-10 10:05:00.819046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:07.250 [2024-06-10 10:05:00.819057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:07.250 [2024-06-10 10:05:00.819101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:07.250 [2024-06-10 10:05:00.819147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:27:07.250 [2024-06-10 10:05:00.819160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:07.250 [2024-06-10 10:05:00.819171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:07.250 [2024-06-10 10:05:00.819235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:07.250 [2024-06-10 10:05:00.819252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:27:07.250 [2024-06-10 10:05:00.819265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:07.250 [2024-06-10 10:05:00.819276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:07.250 [2024-06-10 10:05:00.819449] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 310.718 ms, result 0
00:27:08.627 10:05:01 -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:27:08.627 10:05:01 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:27:08.627 10:05:01 -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:27:08.627 10:05:01 -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:27:08.627 10:05:01 -- ftl/common.sh@181 -- # [[ -n '' ]]
00:27:08.627 10:05:01 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:27:08.627 Remove shared memory files 10:05:01 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:27:08.627 10:05:01 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:27:08.627 10:05:01 -- ftl/common.sh@205 -- # rm -f rm -f
00:27:08.627 10:05:01 -- ftl/common.sh@206 -- # rm -f rm -f
00:27:08.627 10:05:01 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79402
00:27:08.627 10:05:01 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:27:08.627 10:05:01 -- ftl/common.sh@209 -- # rm -f rm -f
00:27:08.627 ************************************
00:27:08.627 END TEST ftl_upgrade_shutdown
00:27:08.627 ************************************
00:27:08.627
00:27:08.627 real 1m34.122s
00:27:08.627 user 2m17.454s
00:27:08.627 sys 0m22.624s
00:27:08.627 10:05:01 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:08.627 10:05:01 -- common/autotest_common.sh@10 -- # set +x
00:27:08.627 10:05:02 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']'
00:27:08.627 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected
00:27:08.627 10:05:02 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']'
00:27:08.627 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected
00:27:08.627 10:05:02 -- ftl/ftl.sh@1 -- # at_ftl_exit
00:27:08.627 10:05:02 -- ftl/ftl.sh@14 -- # killprocess 71853
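The two "[: -eq: unary operator expected" failures above are the classic bash pitfall behind the logged '[' -eq 1 ']' traces: an unset or empty variable, expanded unquoted inside [ ... ], disappears during word splitting, so the test command sees only "-eq 1". A minimal sketch of the failure mode and two common fixes; the variable name here is hypothetical, not the actual ftl.sh source:

#!/usr/bin/env bash
flag=                                 # hypothetical variable; unset/empty in the failing run
if [ $flag -eq 1 ]; then              # expands to [ -eq 1 ] -> "unary operator expected"
    echo enabled
fi
# Fix 1: quote the expansion and give it a default, so [ always sees two operands:
if [ "${flag:-0}" -eq 1 ]; then
    echo enabled
fi
# Fix 2: use [[ ]], which does not word-split; an empty value compares as 0 under -eq:
if [[ $flag -eq 1 ]]; then
    echo enabled
fi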
00:27:08.627 10:05:02 -- common/autotest_common.sh@926 -- # '[' -z 71853 ']'
00:27:08.627 10:05:02 -- common/autotest_common.sh@930 -- # kill -0 71853
00:27:08.627 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (71853) - No such process
00:27:08.627 Process with pid 71853 is not found 10:05:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 71853 is not found'
00:27:08.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 10:05:02 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]]
00:27:08.627 10:05:02 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79860
00:27:08.627 10:05:02 -- ftl/ftl.sh@20 -- # waitforlisten 79860
00:27:08.627 10:05:02 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:08.627 10:05:02 -- common/autotest_common.sh@819 -- # '[' -z 79860 ']'
00:27:08.627 10:05:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:08.627 10:05:02 -- common/autotest_common.sh@824 -- # local max_retries=100
00:27:08.627 10:05:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:08.627 10:05:02 -- common/autotest_common.sh@828 -- # xtrace_disable
00:27:08.627 10:05:02 -- common/autotest_common.sh@10 -- # set +x
00:27:08.627 [2024-06-10 10:05:02.130767] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:27:08.628 [2024-06-10 10:05:02.131139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79860 ]
00:27:08.628 [2024-06-10 10:05:02.295689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:08.886 [2024-06-10 10:05:02.516804] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:27:08.886 [2024-06-10 10:05:02.517239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:10.266 10:05:03 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:27:10.266 10:05:03 -- common/autotest_common.sh@852 -- # return 0
00:27:10.266 10:05:03 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:27:10.524 nvme0n1
00:27:10.524 10:05:04 -- ftl/ftl.sh@22 -- # clear_lvols
00:27:10.524 10:05:04 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:27:10.524 10:05:04 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:27:10.783 10:05:04 -- ftl/common.sh@28 -- # stores=03adfffa-4b30-4710-9e31-63f1bf921fbd
00:27:10.783 10:05:04 -- ftl/common.sh@29 -- # for lvs in $stores
00:27:10.783 10:05:04 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 03adfffa-4b30-4710-9e31-63f1bf921fbd
00:27:11.041 10:05:04 -- ftl/ftl.sh@23 -- # killprocess 79860
00:27:11.041 10:05:04 -- common/autotest_common.sh@926 -- # '[' -z 79860 ']'
00:27:11.041 10:05:04 -- common/autotest_common.sh@930 -- # kill -0 79860
00:27:11.041 10:05:04 -- common/autotest_common.sh@931 -- # uname
00:27:11.041 10:05:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:27:11.041 10:05:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79860
00:27:11.041 killing process with pid 79860 10:05:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0
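The "kill -0 71853" probe above, and its "No such process" result, show how the harness checks whether a recorded pid is still alive before trying to stop it: signal 0 delivers nothing and only reports whether the process exists. A minimal sketch of that probe-then-kill pattern, with an illustrative helper body rather than SPDK's exact autotest_common.sh code:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1              # no pid recorded, nothing to do
    if ! kill -0 "$pid" 2>/dev/null; then  # signal 0: existence/permission check only
        echo "Process with pid $pid is not found"
        return 0
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                # reap it; wait only succeeds for children of this shell
}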
00:27:11.041 10:05:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:27:11.041 10:05:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79860'
00:27:11.041 10:05:04 -- common/autotest_common.sh@945 -- # kill 79860
00:27:11.041 10:05:04 -- common/autotest_common.sh@950 -- # wait 79860
00:27:13.576 10:05:06 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:27:13.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:13.576 Waiting for block devices as requested
00:27:13.576 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme
00:27:13.576 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme
00:27:13.576 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:27:13.576 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:27:18.844 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing
00:27:18.844 Remove shared memory files 10:05:12 -- ftl/ftl.sh@28 -- # remove_shm
00:27:18.844 10:05:12 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:27:18.844 10:05:12 -- ftl/common.sh@205 -- # rm -f rm -f
00:27:18.844 10:05:12 -- ftl/common.sh@206 -- # rm -f rm -f
00:27:18.844 10:05:12 -- ftl/common.sh@207 -- # rm -f rm -f
00:27:18.844 10:05:12 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:27:18.844 10:05:12 -- ftl/common.sh@209 -- # rm -f rm -f
00:27:18.844 ************************************
00:27:18.844 END TEST ftl
00:27:18.844 ************************************
00:27:18.844
00:27:18.844 real 11m44.622s
00:27:18.844 user 14m40.485s
00:27:18.844 sys 1m28.728s
00:27:18.844 10:05:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:18.844 10:05:12 -- common/autotest_common.sh@10 -- # set +x
00:27:18.844 10:05:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:27:18.844 10:05:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:27:18.844 10:05:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:27:18.844 10:05:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:27:18.844 10:05:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:27:18.844 10:05:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:27:18.844 10:05:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:27:18.844 10:05:12 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]]
00:27:18.844 10:05:12 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT
00:27:18.844 10:05:12 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup
00:27:18.844 10:05:12 -- common/autotest_common.sh@712 -- # xtrace_disable
00:27:18.844 10:05:12 -- common/autotest_common.sh@10 -- # set +x
00:27:18.844 10:05:12 -- spdk/autotest.sh@386 -- # autotest_cleanup
00:27:18.844 10:05:12 -- common/autotest_common.sh@1371 -- # local autotest_es=0
00:27:18.844 10:05:12 -- common/autotest_common.sh@1372 -- # xtrace_disable
00:27:18.844 10:05:12 -- common/autotest_common.sh@10 -- # set +x
00:27:20.220 INFO: APP EXITING
00:27:20.220 INFO: killing all VMs
00:27:20.220 INFO: killing vhost app
00:27:20.220 INFO: EXIT DONE
00:27:20.477 lsblk: /dev/nvme0c0n1: not a block device
00:27:20.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:20.734 0000:00:09.0 (1b36 0010): Already using the nvme driver
00:27:20.734 0000:00:08.0 (1b36 0010): Already using the nvme driver
00:27:20.734 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:27:20.734 0000:00:07.0 (1b36 0010): Already using the nvme driver
00:27:21.308 lsblk: /dev/nvme0c0n1: not a block device
00:27:21.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:21.569 Cleaning
00:27:21.569 Removing: /var/run/dpdk/spdk0/config
00:27:21.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:27:21.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:27:21.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:27:21.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:27:21.569 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:27:21.569 Removing: /var/run/dpdk/spdk0/hugepage_info
00:27:21.569 Removing: /var/run/dpdk/spdk0
00:27:21.569 Removing: /var/run/dpdk/spdk_pid56733
00:27:21.569 Removing: /var/run/dpdk/spdk_pid56932
00:27:21.569 Removing: /var/run/dpdk/spdk_pid57231
00:27:21.569 Removing: /var/run/dpdk/spdk_pid57335
00:27:21.569 Removing: /var/run/dpdk/spdk_pid57429
00:27:21.569 Removing: /var/run/dpdk/spdk_pid57539
00:27:21.569 Removing: /var/run/dpdk/spdk_pid57640
00:27:21.569 Removing: /var/run/dpdk/spdk_pid57685
00:27:21.569 Removing: /var/run/dpdk/spdk_pid57716
00:27:21.569 Removing: /var/run/dpdk/spdk_pid57783
00:27:21.569 Removing: /var/run/dpdk/spdk_pid57889
00:27:21.569 Removing: /var/run/dpdk/spdk_pid58333
00:27:21.569 Removing: /var/run/dpdk/spdk_pid58410
00:27:21.569 Removing: /var/run/dpdk/spdk_pid58473
00:27:21.569 Removing: /var/run/dpdk/spdk_pid58502
00:27:21.569 Removing: /var/run/dpdk/spdk_pid58630
00:27:21.569 Removing: /var/run/dpdk/spdk_pid58654
00:27:21.569 Removing: /var/run/dpdk/spdk_pid58784
00:27:21.569 Removing: /var/run/dpdk/spdk_pid58813
00:27:21.569 Removing: /var/run/dpdk/spdk_pid58880
00:27:21.569 Removing: /var/run/dpdk/spdk_pid58900
00:27:21.569 Removing: /var/run/dpdk/spdk_pid58964
00:27:21.569 Removing: /var/run/dpdk/spdk_pid58994
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59161
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59203
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59283
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59360
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59392
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59464
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59490
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59537
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59563
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59604
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59630
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59682
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59710
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59751
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59777
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59818
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59850
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59896
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59922
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59969
00:27:21.569 Removing: /var/run/dpdk/spdk_pid59995
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60036
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60066
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60114
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60140
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60181
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60213
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60259
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60285
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60332
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60358
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60399
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60436
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60477
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60503
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60544
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60576
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60621
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60651
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60701
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60730
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60780
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60811
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60852
00:27:21.569 Removing: /var/run/dpdk/spdk_pid60878
00:27:21.827 Removing: /var/run/dpdk/spdk_pid60926
00:27:21.827 Removing: /var/run/dpdk/spdk_pid61007
00:27:21.827 Removing: /var/run/dpdk/spdk_pid61116
00:27:21.827 Removing: /var/run/dpdk/spdk_pid61289
00:27:21.827 Removing: /var/run/dpdk/spdk_pid61387
00:27:21.827 Removing: /var/run/dpdk/spdk_pid61429
00:27:21.827 Removing: /var/run/dpdk/spdk_pid61905
00:27:21.827 Removing: /var/run/dpdk/spdk_pid62050
00:27:21.827 Removing: /var/run/dpdk/spdk_pid62155
00:27:21.827 Removing: /var/run/dpdk/spdk_pid62208
00:27:21.827 Removing: /var/run/dpdk/spdk_pid62239
00:27:21.827 Removing: /var/run/dpdk/spdk_pid62314
00:27:21.827 Removing: /var/run/dpdk/spdk_pid62999
00:27:21.827 Removing: /var/run/dpdk/spdk_pid63041
00:27:21.827 Removing: /var/run/dpdk/spdk_pid63556
00:27:21.827 Removing: /var/run/dpdk/spdk_pid63660
00:27:21.827 Removing: /var/run/dpdk/spdk_pid63770
00:27:21.827 Removing: /var/run/dpdk/spdk_pid63823
00:27:21.827 Removing: /var/run/dpdk/spdk_pid63854
00:27:21.827 Removing: /var/run/dpdk/spdk_pid63885
00:27:21.827 Removing: /var/run/dpdk/spdk_pid65841
00:27:21.827 Removing: /var/run/dpdk/spdk_pid65996
00:27:21.827 Removing: /var/run/dpdk/spdk_pid66000
00:27:21.827 Removing: /var/run/dpdk/spdk_pid66012
00:27:21.827 Removing: /var/run/dpdk/spdk_pid66058
00:27:21.827 Removing: /var/run/dpdk/spdk_pid66067
00:27:21.827 Removing: /var/run/dpdk/spdk_pid66079
00:27:21.827 Removing: /var/run/dpdk/spdk_pid66118
00:27:21.827 Removing: /var/run/dpdk/spdk_pid66122
00:27:21.827 Removing: /var/run/dpdk/spdk_pid66139
00:27:21.827 Removing: /var/run/dpdk/spdk_pid66184
00:27:21.827 Removing: /var/run/dpdk/spdk_pid66188
00:27:21.827 Removing: /var/run/dpdk/spdk_pid66200
00:27:21.827 Removing: /var/run/dpdk/spdk_pid67660
00:27:21.827 Removing: /var/run/dpdk/spdk_pid67767
00:27:21.827 Removing: /var/run/dpdk/spdk_pid67906
00:27:21.827 Removing: /var/run/dpdk/spdk_pid68032
00:27:21.827 Removing: /var/run/dpdk/spdk_pid68153
00:27:21.827 Removing: /var/run/dpdk/spdk_pid68279
00:27:21.827 Removing: /var/run/dpdk/spdk_pid68428
00:27:21.827 Removing: /var/run/dpdk/spdk_pid68508
00:27:21.827 Removing: /var/run/dpdk/spdk_pid68648
00:27:21.827 Removing: /var/run/dpdk/spdk_pid69044
00:27:21.827 Removing: /var/run/dpdk/spdk_pid69086
00:27:21.827 Removing: /var/run/dpdk/spdk_pid69559
00:27:21.827 Removing: /var/run/dpdk/spdk_pid69741
00:27:21.827 Removing: /var/run/dpdk/spdk_pid69842
00:27:21.827 Removing: /var/run/dpdk/spdk_pid69952
00:27:21.827 Removing: /var/run/dpdk/spdk_pid70011
00:27:21.827 Removing: /var/run/dpdk/spdk_pid70042
00:27:21.827 Removing: /var/run/dpdk/spdk_pid70358
00:27:21.827 Removing: /var/run/dpdk/spdk_pid70429
00:27:21.827 Removing: /var/run/dpdk/spdk_pid70509
00:27:21.827 Removing: /var/run/dpdk/spdk_pid70907
00:27:21.827 Removing: /var/run/dpdk/spdk_pid71059
00:27:21.827 Removing: /var/run/dpdk/spdk_pid71853
00:27:21.827 Removing: /var/run/dpdk/spdk_pid71982
00:27:21.827 Removing: /var/run/dpdk/spdk_pid72200
00:27:21.827 Removing: /var/run/dpdk/spdk_pid72300
00:27:21.827 Removing: /var/run/dpdk/spdk_pid72667
00:27:21.827 Removing: /var/run/dpdk/spdk_pid72933
00:27:21.827 Removing: /var/run/dpdk/spdk_pid73312
00:27:21.827 Removing: /var/run/dpdk/spdk_pid73546
00:27:21.827 Removing: /var/run/dpdk/spdk_pid73717
00:27:21.827 Removing: /var/run/dpdk/spdk_pid73783
00:27:21.827 Removing: /var/run/dpdk/spdk_pid73932
00:27:21.827 Removing: /var/run/dpdk/spdk_pid73967
00:27:21.827 Removing: /var/run/dpdk/spdk_pid74034
00:27:21.827 Removing: /var/run/dpdk/spdk_pid74235
00:27:21.827 Removing: /var/run/dpdk/spdk_pid74490
00:27:21.827 Removing: /var/run/dpdk/spdk_pid74920
00:27:21.827 Removing: /var/run/dpdk/spdk_pid75362
00:27:21.827 Removing: /var/run/dpdk/spdk_pid75803
00:27:21.827 Removing: /var/run/dpdk/spdk_pid76331
00:27:21.827 Removing: /var/run/dpdk/spdk_pid76475
00:27:21.827 Removing: /var/run/dpdk/spdk_pid76573
00:27:21.827 Removing: /var/run/dpdk/spdk_pid77251
00:27:21.827 Removing: /var/run/dpdk/spdk_pid77332
00:27:21.827 Removing: /var/run/dpdk/spdk_pid77823
00:27:21.827 Removing: /var/run/dpdk/spdk_pid78250
00:27:21.827 Removing: /var/run/dpdk/spdk_pid78775
00:27:21.827 Removing: /var/run/dpdk/spdk_pid78909
00:27:21.827 Removing: /var/run/dpdk/spdk_pid78966
00:27:21.827 Removing: /var/run/dpdk/spdk_pid79035
00:27:21.827 Removing: /var/run/dpdk/spdk_pid79106
00:27:21.827 Removing: /var/run/dpdk/spdk_pid79181
00:27:21.827 Removing: /var/run/dpdk/spdk_pid79402
00:27:21.827 Removing: /var/run/dpdk/spdk_pid79450
00:27:21.827 Removing: /var/run/dpdk/spdk_pid79529
00:27:21.827 Removing: /var/run/dpdk/spdk_pid79608
00:27:21.827 Removing: /var/run/dpdk/spdk_pid79654
00:27:21.827 Removing: /var/run/dpdk/spdk_pid79732
00:27:21.827 Removing: /var/run/dpdk/spdk_pid79860
00:27:21.827 Clean
00:27:22.085 killing process with pid 48368
00:27:22.085 killing process with pid 48374
00:27:22.085 10:05:15 -- common/autotest_common.sh@1436 -- # return 0
00:27:22.085 10:05:15 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:27:22.085 10:05:15 -- common/autotest_common.sh@718 -- # xtrace_disable
00:27:22.085 10:05:15 -- common/autotest_common.sh@10 -- # set +x
00:27:22.085 10:05:15 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:27:22.085 10:05:15 -- common/autotest_common.sh@718 -- # xtrace_disable
00:27:22.085 10:05:15 -- common/autotest_common.sh@10 -- # set +x
00:27:22.085 10:05:15 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:22.085 10:05:15 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:27:22.085 10:05:15 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:27:22.085 10:05:15 -- spdk/autotest.sh@394 -- # hash lcov
00:27:22.085 10:05:15 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:27:22.085 10:05:15 -- spdk/autotest.sh@396 -- # hostname
00:27:22.085 10:05:15 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:27:22.342 geninfo: WARNING: invalid characters removed from testname!
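The coverage post-processing that begins with the capture above and continues below follows the usual lcov shape: capture counters from the instrumented build tree, append the test capture to the baseline, then repeatedly strip third-party and system paths from the combined tracefile. A condensed sketch with placeholder paths (the logged commands also carry a long list of --rc options, omitted here):

lcov -q -c -d ./build -t "$(hostname)" -o cov_test.info      # capture from the .gcda counters
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info  # merge baseline and test captures
for pat in '*/dpdk/*' '/usr/*'; do                           # same pruning idea as the @398/@399 steps
    lcov -q -r cov_total.info "$pat" -o cov_total.info       # drop records matching pat
done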
00:27:54.407 10:05:44 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:54.666 10:05:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:57.951 10:05:51 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:00.533 10:05:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:03.818 10:05:56 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:06.376 10:05:59 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:08.907 10:06:02 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:28:08.907 10:06:02 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:28:08.907 10:06:02 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:08.907 10:06:02 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:08.907 10:06:02 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:08.907 10:06:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:08.907 10:06:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:08.907 10:06:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:08.907 10:06:02 -- paths/export.sh@5 -- $ export PATH
00:28:08.907 10:06:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:08.907 10:06:02 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:28:08.907 10:06:02 -- common/autobuild_common.sh@435 -- $ date +%s
00:28:08.907 10:06:02 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718013962.XXXXXX
00:28:08.907 10:06:02 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718013962.5D3GLD
00:28:08.907 10:06:02 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:28:08.907 10:06:02 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:28:08.907 10:06:02 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:28:08.907 10:06:02 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:28:08.907 10:06:02 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:28:08.907 10:06:02 -- common/autobuild_common.sh@451 -- $ get_config_params
00:28:08.907 10:06:02 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:28:08.907 10:06:02 -- common/autotest_common.sh@10 -- $ set +x
00:28:09.166 10:06:02 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:28:09.166 10:06:02 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:28:09.166 10:06:02 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:28:09.166 10:06:02 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:09.166 10:06:02 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:28:09.166 10:06:02 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:09.166 10:06:02 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:09.166 10:06:02 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:09.166 10:06:02 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:09.166 10:06:02 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:09.166 10:06:02 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:09.176 + [[ -n 5192 ]]
00:28:09.176 + sudo kill 5192
00:28:09.186 [Pipeline] }
00:28:09.197 [Pipeline] // timeout
00:28:09.203 [Pipeline] }
00:28:09.221 [Pipeline] // stage
00:28:09.227 [Pipeline] }
00:28:09.245 [Pipeline] // catchError
00:28:09.255 [Pipeline] stage
00:28:09.257 [Pipeline] { (Stop VM)
00:28:09.270 [Pipeline] sh
00:28:09.545 + vagrant halt
00:28:13.732 ==> default: Halting domain...
00:28:19.050 [Pipeline] sh
00:28:19.332 + vagrant destroy -f
00:28:23.519 ==> default: Removing domain...
00:28:23.532 [Pipeline] sh
00:28:23.812 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:28:23.822 [Pipeline] }
00:28:23.842 [Pipeline] // stage
00:28:23.849 [Pipeline] }
00:28:23.869 [Pipeline] // dir
00:28:23.876 [Pipeline] }
00:28:23.898 [Pipeline] // wrap
00:28:23.905 [Pipeline] }
00:28:23.923 [Pipeline] // catchError
00:28:23.934 [Pipeline] stage
00:28:23.937 [Pipeline] { (Epilogue)
00:28:23.954 [Pipeline] sh
00:28:24.235 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:30.806 [Pipeline] catchError
00:28:30.808 [Pipeline] {
00:28:30.822 [Pipeline] sh
00:28:31.101 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:31.360 Artifacts sizes are good
00:28:31.368 [Pipeline] }
00:28:31.386 [Pipeline] // catchError
00:28:31.400 [Pipeline] archiveArtifacts
00:28:31.408 Archiving artifacts
00:28:31.574 [Pipeline] cleanWs
00:28:31.585 [WS-CLEANUP] Deleting project workspace...
00:28:31.585 [WS-CLEANUP] Deferred wipeout is used...
00:28:31.590 [WS-CLEANUP] done
00:28:31.592 [Pipeline] }
00:28:31.613 [Pipeline] // stage
00:28:31.620 [Pipeline] }
00:28:31.640 [Pipeline] // node
00:28:31.645 [Pipeline] End of Pipeline
00:28:31.680 Finished: SUCCESS